dustinr

Members
  • Content Count: 17
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About dustinr
  • Rank: Member

  1. Oh yea, that was a typo, it is /mnt/user/, please disregard.
  2. Nah, that's just a USB backup drive.
  3. The /mnt/usr/specialappdata is not a typo; previously people talked about isolating the appdata to a single disk, so I created a share just for that purpose, and it only lives on DISK1. The 100G docker image was from a misconfigured container install years ago that I just haven't gotten around to resizing.
  4. Updated 4 nights ago, just noticed corruption tonight, but it looks like it started a night or two ago. Just for clarification, all dockers live on /mnt/usr/specialappdata/, which is confined to a single disk / not running any parity. tower-diagnostics-20190923-0157.zip
  5. So I updated to rc4 earlier in the day; does it include the rc3 update as well, or should I just downgrade to rc3? Thanks
  6. Well, Sonarr finally corrupted running the latest RC2: * no parity drive * no cache drive. Sonarr corrupted after about a week. Restoring the appdata backup from a few days ago and rolling back to the previous unRAID. I can't beta test anymore lol. tower-diagnostics-20190824-1805.zip
  7. _IF_ you're feeling adventurous, remove your parity drive from your array and see if corruption occurs on RC2. I've been running perfectly since I deleted my parity drive. If nothing else, it would be a good test to correlate the issue.
  8. Well, I did see increased IO, but I think that's primarily because I deleted my parity drive and just made it part of my storage array. unRAID is performing GREAT now, and I haven't had any corruption in three days. I think all of my issues stem from the parity drive. Is there anything on the road map for SnapRAID (or something similar)? I think my big bottleneck is writing parity drive data synchronously on old hardware / old hard drives (2010-2018). EDIT: In the interest of SCIENCE, I am upgrading my unRAID from 6.6.7 to the new 6.7.3-rc2 and continuing to run without parity and without cache.
  9. Yea, I rolled back AND removed my parity drive (for better performance / more space...), so I'm not really sure how much help I will be to you guys. But I'm going to keep a REAL close eye on my SQL for Radarr/Sonarr and will let you guys know if I see corruption on the 6.6.x branch as well. Thanks -DCR
  10. Just a quick update: I rolled back to unRAID 6.6.7 and turned my parity drive into a data drive (my thought process is that I have been getting such terrible IO speeds that maybe the parity is my issue...). Gonna reload all these docker images one by one, rebuild the DB, and let you guys know how it goes. -DR
  11. Sonarr decided to shit the bed now too... restoring from backup now... What further testing can I do? I'm tempted to roll back, but I would love to help fix this bug. Also be advised, I'm having the same issues as described in this thread as well: let me know how to proceed.
  12. Is there a good guide to roll back to 6.6.x? Also, what features will I lose (I'm on the latest rc)? -DCR
  13. I couldn't test Plex since I'm remote, but is this typically what the corruption looks like from the log file (retry db?)
  14. Interesting you say that, because I just rebuilt earlier, and the only docker image that now has SQLite corruption is my Radarr image... tower-diagnostics-20190812-1630.zip
  15. Throwing my hat in the ring: I just got my unRAID box BACK running (had to switch out the motherboard). Same shares / same unRAID / same hard drives as before. Booted up unRAID (after it being offline for about 2 months) and all my SQLite dockers had corruption. Went ahead and updated to the latest RC today (6.7.3-rc2). Tried to recover the SQLite databases... NO DICE. Rebuilt Plex / Sonarr / Radarr FROM SCRATCH... (ugh...). Created a dedicated appdata under /mnt/disk1/specialappdata/. It's been running for 8 hours with no corruption. If I see anything I will update here. Is there any more detailed info you need?
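For anyone following along: before restoring a backup, it's worth confirming the database file is actually corrupt (and not just locked by a running container). SQLite ships a built-in check for this. A minimal sketch in Python; the path shown is an assumption for illustration, not the actual layout from these posts:

```python
import sqlite3

def check_integrity(db_path):
    """Run SQLite's built-in scan of every page and index.

    Returns the string 'ok' for a healthy file; anything else is a
    description of the damage found.
    """
    con = sqlite3.connect(db_path)
    try:
        return con.execute("PRAGMA integrity_check;").fetchone()[0]
    finally:
        con.close()

# Example (hypothetical path -- point it at your own appdata):
# print(check_integrity("/mnt/disk1/specialappdata/sonarr/sonarr.db"))
```

Stop the container first, so you aren't checking a file that's mid-write.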
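The "tried to recover... no dice" attempts in posts like #15 generally mean a dump-and-reload: copy whatever statements can still be read out of the damaged file into a fresh database. A hedged sketch of that approach (file names are made up for illustration); rows on unreadable pages are simply dropped, so the result can be incomplete:

```python
import sqlite3

def salvage(corrupt_path, fresh_path):
    """Copy readable schema and rows from corrupt_path into fresh_path.

    Returns the number of statements that had to be skipped because
    they touched unreadable pages.
    """
    src = sqlite3.connect(corrupt_path)
    # isolation_level=None (autocommit) lets us execute the BEGIN/COMMIT
    # statements that iterdump() emits without a nested-transaction error.
    dst = sqlite3.connect(fresh_path, isolation_level=None)
    skipped = 0
    try:
        for stmt in src.iterdump():  # yields the schema + INSERTs as SQL text
            try:
                dst.execute(stmt)
            except sqlite3.DatabaseError:
                skipped += 1  # statement hit a corrupt page; drop it
    finally:
        src.close()
        dst.close()
    return skipped
```

Even when this salvages most rows, the app's own consistency (foreign keys, in-flight downloads) isn't guaranteed, which is why rebuilding from scratch or restoring an appdata backup, as these posts ended up doing, is often the saner path.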