runraid
Members · Content Count: 86 · Joined · Last visited

Everything posted by runraid

  1. Here's a reddit thread on this. There are numerous people suffering from SQLite DB corruption since upgrading to 6.7.0, myself included. For me, Plex, Sonarr, Radarr, and Tautulli have all had corrupted databases multiple times. Moving to /mnt/disk1 does solve the issue. I never had this problem prior to 6.7.0; it started only after I upgraded.
  2. One additional note: /mnt/cache works as well, after fixing the same permissions issue. Running "chmod +w" on the db files fixed my loading issue. I'll run it on the cache drive for a few days. I guess this means I'll need to back up to the array manually?
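A manual backup like the one mentioned above could be sketched roughly as below. This is only an illustration, not an official Unraid mechanism: the paths (/mnt/cache/appdata, /mnt/disk1/backups) are assumptions you would adjust for your own shares, and it assumes rsync is available.

```shell
#!/bin/sh
# Minimal sketch of a manual appdata backup from the cache drive to the
# array. Paths are assumptions -- adjust for your own shares.

backup_appdata() {
    src=$1 dest=$2
    mkdir -p "$dest" || return 1
    # -a preserves permissions/timestamps; --delete mirrors deletions
    rsync -a --delete "$src"/ "$dest"/
}

# Example invocation (e.g. nightly from cron or the User Scripts plugin):
# backup_appdata /mnt/cache/appdata "/mnt/disk1/backups/appdata-$(date +%F)"
```

Stopping the containers first would avoid copying a SQLite database mid-write.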
  3. I was able to get it up and running on /mnt/disk3. The problem was permissions on the db files. I added +w and I'm good there now. I'll see how things go for a few days and report back.
  4. No luck when using /mnt/cache. I copied everything there and the corruption still occurs, even if I restore from a good copy.
  5. Thank you very much. I’m out right now but will test this as soon as I return and update this thread with the results.
  6. I've been facing an issue where my com.plexapp.plugins.library.db DB corrupts. I've narrowed it down to this repro:
     1. Download a new TV show.
     2. Place it in my TV media directory.
     3. Plex notices a new show was added and starts adding it to my Plex library.
     4. The DB corrupts.
     I then need to restore my db files from the previous day's backup. After that I click "Scan Library Files" in Plex to find the missing TV show, and it finds and adds it without issue. Things work great until Plex automatically finds new media again. This is how I restore:
     DATE="2019-06-01"
     rm -f com.plexapp.plugins.library.db-shm
     rm -f com.plexapp.plugins.library.db-wal
     rm -f com.plexapp.plugins.library.blobs.db
     rm -f com.plexapp.plugins.library.db
     cp com.plexapp.plugins.library.blobs.db-$DATE com.plexapp.plugins.library.blobs.db
     cp com.plexapp.plugins.library.db-$DATE com.plexapp.plugins.library.db
     Anyone have any idea what could be going on? I've tried changing my config directory from /mnt/user/appdata/plex to /mnt/user/cache/plex, and that causes corruption 100% of the time; I can't even get Plex to start when pointed at /mnt/user/cache/plex. I've run SMART on the disk that contains the DB and there are no errors. I run a VM on the same SSD cache drive and have no issues.
My cache drive (SSD) has this SMART self-test history:

Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Vendor (0x60)     Completed without error  00%        4108             -
# 2  Offline           Completed without error  00%        0                -
# 3  Offline           Completed without error  00%        0                -
# 4  Offline           Completed without error  00%        0                -
# 5  Vendor (0x54)     Completed without error  00%        0                -
# 6  Vendor (0xb3)     Unknown status (0xe)     60%        211              -
# 7  Vendor (0x9d)     Unknown status (0xe)     50%        57544            -
# 8  Vendor (0xa5)     Unknown status (0xe)     50%        57032            -
# 9  Offline           Completed without error  00%        0                -
#10  Offline           Completed without error  00%        64               -
#11  Offline           Completed without error  00%        0                -
#12  Offline           Completed without error  00%        0                -
#13  Offline           Completed without error  00%        0                -

And this SMART error log:

Warning: ATA error count 2830 inconsistent with error log pointer 4
ATA Error Count: 2830 (device log contains only the most recent five errors)

(Register legend: CR/FR/SC/SN/CL/CH/DH/DC are the Command, Features, Sector Count, Sector Number, Cylinder Low, Cylinder High, Device/Head, and Device Command registers; ER/ST are the Error and Status registers, all in hex. Powered_Up_Time is measured from power on, printed as DDd+hh:mm:SS.sss, and wraps after 49.710 days.)

Errors 2830 through 2826 all occurred at disk power-on lifetime 37228 hours (1551 days + 4 hours), each while the device was active or idle. After command completion the registers were identical in every case:

ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 c3 40 00 30 b3 01

The commands leading to each error were the same repeating pattern of SET FEATURES [Enable write cache] pairs; for error 2830:

CR FR SC SN CL CH DH DC  Powered_Up_Time   Command/Feature_Name
ef 02 40 00 30 b3 01 60  00:00:15.776      SET FEATURES [Enable write cache]
ef 02 00 00 30 b3 00 00  00:00:15.776      SET FEATURES [Enable write cache]
ef 02 40 00 30 b3 01 60  00:00:15.760      SET FEATURES [Enable write cache]
ef 02 00 00 30 b3 00 00  00:00:15.760      SET FEATURES [Enable write cache]
ef 02 40 00 30 b3 01 60  00:00:15.745      SET FEATURES [Enable write cache]

Errors 2829, 2828, 2827, and 2826 show the same register state and the same command sequence, shifted back one step each (timestamps running from 00:00:15.682 through 00:00:15.760).
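The restore commands in the post above can be hardened slightly so a corrupt backup is never swapped in. This is a sketch, not Plex-endorsed tooling: it assumes the sqlite3 CLI is installed, and `restore_db` is a hypothetical helper name; the file names follow the post.

```shell
#!/bin/sh
# Sketch: restore a SQLite backup only after it passes an integrity check.
# Assumes the sqlite3 CLI is on the PATH; restore_db is a made-up helper.

restore_db() {
    backup=$1 live=$2
    # Refuse to restore a backup that is itself corrupt
    result=$(sqlite3 "$backup" 'PRAGMA integrity_check;') || return 1
    [ "$result" = "ok" ] || { echo "backup failed integrity check" >&2; return 1; }
    # Drop the live db plus its WAL/SHM sidecars, then copy the backup in
    rm -f "$live" "$live-shm" "$live-wal"
    cp "$backup" "$live"
}

# DATE="2019-06-01"
# restore_db "com.plexapp.plugins.library.db-$DATE" com.plexapp.plugins.library.db
# restore_db "com.plexapp.plugins.library.blobs.db-$DATE" com.plexapp.plugins.library.blobs.db
```

Running the same `PRAGMA integrity_check` against the live database is also a quick way to confirm corruption before restoring.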
  7. Support thread for the lidarr Docker template. Application Name: lidarr Application Site: https://www.reddit.com/r/Lidarr/ Docker Hub: https://hub.docker.com/r/volikon/lidarr/ Application Github: https://github.com/lidarr Template Github: https://github.com/rroller/unraid-templates
  8. Thanks -- now the process is clear. I've updated per the instructions above, with no manual edits. I added my repo and tested, and it works great. Can you review once more?
  9. Please configure https://github.com/rroller/unraid-templates for use with CA. Thanks! I've added a support thread here.
  10. Support thread for the pgadmin Docker template. Application Name: pgadmin Application Site: https://www.pgadmin.org/ Docker Hub: https://hub.docker.com/r/fenglc/pgadmin4/ Application Github: https://github.com/postgres/pgadmin4 Template Github: https://github.com/rroller/unraid-templates Admins: Please configure https://github.com/rroller/unraid-templates for use with CA. Thanks!
  11. Thanks. I'm going to do some experimenting. I'm fine with Linux, but it's nice not having that VM running, and it's nice having the Unraid UI show me when there are updates to the image. I guess I could run a small VM to play the audio through and have Home Assistant play through that. Thanks again for all of the quick replies. You all are very helpful.
  12. Thanks again. I'm wondering what my options are. I'd like to keep Home Assistant in a Docker container and not in a VM. I currently have it playing audio via a Chromecast Audio, but that has about a three-second delay. I'd like to have a speaker wired up directly.
  13. I see. Thanks. Could the container have the audio drivers?
  14. Hi, how would I go about accessing the audio out jack on my server from inside a Docker container? I run the Home Assistant Docker container and I'd like to play sound during certain events but I'm not sure how to get access to the audio out inside the container. Thanks!
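For the question above, the usual approach is to pass the host's ALSA sound device through to the container with Docker's `--device` flag. The sketch below just prints the command as a dry run; the image name and appdata path are assumptions for illustration, and the container image must itself ship ALSA userspace libraries.

```shell
#!/bin/sh
# Sketch: docker flags that expose the host's ALSA device (/dev/snd)
# inside a container. Printed as a dry run; drop the echo to execute.
# Image name and config path are illustrative assumptions.

cmd="docker run -d --name homeassistant \
  --device /dev/snd \
  -v /mnt/user/appdata/homeassistant:/config \
  homeassistant/home-assistant"

echo "$cmd"
```

In the Unraid Docker UI the equivalent is adding `--device /dev/snd` to the container's extra parameters.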
  15. The latest version of the "dperson/nginx" container no longer runs on Unraid, because the latest dperson/nginx image enables IPv6. Can we please have IPv6 enabled on Unraid? Please see: https://github.com/dperson/nginx/commit/5f06b2b246f9e8c712d8fb8becced3cef2b37b82
  16. Should probably at least add those keywords to the wiki so people can find it. Edit: looks like the word "remove" is in the wiki.
  17. What's your CPU usage look like? I had high CPU usage for several hours although nothing really interesting was happening on my machine. I found that once I restarted the Jackett container the CPU usage went back down to almost 0. Maybe try restarting containers one by one to see if any of them are consuming your CPU.
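The hunt described above can be shortened with a one-shot snapshot of per-container CPU usage, sorted highest-first. A small sketch, assuming the `docker` CLI; `top_cpu` is a made-up helper name, and the sort works on any "name whitespace NN.N%" lines.

```shell
#!/bin/sh
# Sketch: sort "name percent" lines highest-first so the container
# pegging the CPU lands on top. top_cpu is a made-up helper name.

top_cpu() {
    # -k2: sort on the percentage column; -rn: numeric, descending
    sort -k2 -rn
}

# Live usage against Docker:
# docker stats --no-stream --format '{{.Name}} {{.CPUPerc}}' | top_cpu
```

Whichever container tops the list is the first candidate for a restart.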
  18. Open your case and put a floor fan blowing into your PC until you get proper cooling. That many drives running that hot looks dangerous.
  19. Make sure you have proper cooling in your case. Are your other drives heating up too? All of my drives were running hot, but once I put in additional fans things cooled down. I'm always running at less than 40°C now. Here's my post on the subject.
  20. Thanks @jonathanm @kizer @1812 @ashman70. I've added new fans, cleaned out the dust, and I'm running at an average of 36°C now with the sides on the case.
  21. Is it possible to create an NFS or SMB share with a quota? I have a Hikvision IP camera that will write to an NFS or SMB share, but it must do what Hikvision calls a "format" by prefilling the space in the share with files. This will consume my entire unraid space. If I could create a limited share, then that would work. My other option is returning the camera and buying one that doesn't do something so silly.
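Unraid shares don't have per-share quotas, but one workaround for the scenario above is to export a fixed-size loopback image as the camera's share, so the "format" prefill can consume at most the image size. A sketch under assumptions: `make_quota_image` is a made-up helper, the paths and 64G size are illustrative, and the mkfs/mount steps need root.

```shell
#!/bin/sh
# Sketch: fake a quota with a fixed-size loopback image. Paths and size
# are illustrative; mkfs/mount require root. make_quota_image is a
# made-up helper name.

make_quota_image() {
    img=$1 size=$2
    # Sparse file: uses array space only as data is written,
    # but can never grow past $size
    truncate -s "$size" "$img"
}

# On the server (as root):
# make_quota_image /mnt/disk1/camera.img 64G
# mkfs.xfs /mnt/disk1/camera.img
# mkdir -p /mnt/camera && mount -o loop /mnt/disk1/camera.img /mnt/camera
# then export /mnt/camera over SMB or NFS for the camera.
```

The camera sees a 64 GB filesystem and fills that instead of the whole array.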
  22. @jonathanm Thank you very much for the feedback. Here's my case: https://www.newegg.com/Product/Product.aspx?Item=N82E16811235056 Attached are pics of my tower.
  23. Is 45°C too hot for my disks? When my case is closed, the disks sit at 45°C. If I open the case and put a floor fan on them, they drop to around 32°C.
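For monitoring this, the current temperature can be pulled out of `smartctl -A` output and logged or alerted on. A sketch: `disk_temp` is a made-up helper, `/dev/sdb` is an illustrative device name, and column 10 is the raw value in standard smartctl attribute tables.

```shell
#!/bin/sh
# Sketch: extract the raw Temperature_Celsius value (column 10) from
# `smartctl -A` output. disk_temp is a made-up helper name.

disk_temp() {
    awk '/Temperature_Celsius/ {print $10}'
}

# Live usage (as root; device name is an assumption):
# smartctl -A /dev/sdb | disk_temp
```

A cron job comparing this against a threshold (say 45) is an easy overheating alert.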