About simalex

  1. Sorry to report that DB corruption appeared in Sonarr. As I mentioned before, with the proposed setup the manual import process was taking too long if at the same time I was doing another file transfer. So the first two rounds of manual import (around 300GB) were actually done with no other load on the server. For the third round I started a manual import of another 80GB of media, a concurrent file transfer of another 600GB to the server, and a series refresh in Sonarr. File transfer speed was as before, ranging from 50-70MB/s. The actual import speed was a
  2. Hi all. I upgraded to 6.8.0 rc3 and set Disk Settings->Tunable (scheduler) to None. The Sonarr docker was clean installed, removing every existing trace from appdata. Then I started my normal Sonarr manual import routine to force load the system. At the same time I started copying data from my other unRaid server to the new one, via my desktop. Total data to be copied was around 300GB in 800 files. Two things I noticed. a. First time around, I started the manual import of a season, and then started the file transfer. The file transfer in this case did not seem to
  3. OK, finished setup of the new server yesterday. Installed only 2 dockers: Sonarr (linuxserver version) and Plex (Plex media version). Started copying a few series from my previous server to the download dir of the new one. Once the first series finished copying I started manual import. Everything completed fine. When the second series finished copying I attempted to start manual import and immediately got a database malformed error in Sonarr. At the same time I was still copying other files to the new server. This is now from a clean install, DBs for both Sonarr
  4. Don't see how this could have happened. This was the same DB that had been working from a restore taken before upgrading to 6.7.2. The Sonarr logs had not had any corruption message for at least 2 months. Also, the malformed database messages were actually linked to some series episodes that literally aired Sunday night, after I had upgraded to 6.8, while Sonarr was trying to decide whether to put them in the download list. You did get me wondering though, so I did the following: a. checked the DB backup from before upgrading to 6.8 for corruptions -- no errors came
  5. Sonarr is importing from the download area where deluge or nzbget placed the files and moving media to the same directories where the Plex media library resides. This is a common setup. Plex, as far as I understand, gets notified that there was a change in 2 ways. a. Sonarr is connecting to Plex and notifying it of any changes it made to the media library (additions, deletions, replacements). b. Plex is also monitoring the media directories and when it detects a change will internally trigger a library re-scan. So when you import a new media file through Sona
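Mechanism (a) above can be sketched against Plex's HTTP API, which exposes a per-section re-scan endpoint. This is a minimal sketch, not the exact call Sonarr makes; the host name, section id, and token are placeholders.

```python
# Hedged sketch: asking a Plex library section to re-scan over HTTP.
# "tower.local", section id 1, and "PLEX_TOKEN" are placeholders.
from urllib.parse import urlencode
from urllib.request import urlopen  # only needed for the actual call

def refresh_url(host, section_id, token):
    # Plex listens on port 32400; the token authenticates the request.
    query = urlencode({"X-Plex-Token": token})
    return f"http://{host}:32400/library/sections/{section_id}/refresh?{query}"

# urlopen(refresh_url("tower.local", 1, "PLEX_TOKEN"))  # trigger the re-scan
```

Mechanism (b) needs no call at all: Plex's own folder monitoring notices the new files and queues the same re-scan internally.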
  6. was using linuxserver I have both dockers installed with different paths for their config files: binhex has /config in /mnt/disk1/appdata/binhex-sonarr, linuxserver has /config in /mnt/disk1/appdata/sonarr. I will do the same load test with binhex before downgrading, to see if the corruption happens there also. The reason I moved back to linuxserver is that the backup in binhex is somehow broken and access to /tmp/nzbdrone_backup is for some reason denied. I don't know the reason for this; maybe it is related to the fact that I restored the DB to binhex docke
  7. Hello all. Sorry to report that I just witnessed the DB corruption on my first attempt to stress the server on 6.8.0rc1. The test was as follows: Plex was scanning the library; Sonarr was importing around 10GB worth of media, and is set to notify Plex upon successful import of a library item. At the same time I was moving 180GB of data through the network to a single disk of the array via a disk mount. The performance was good as I was monitoring the data transfer. It only seemed to slow very much at one point, where I am guessing some of t
  8. Just upgraded to 6.8.0 rc1. Have been running without any corruption for the last 5 hours now. I did not have the opportunity to overload the unRaid server yet. Will try this tomorrow. Keeping my fingers crossed.
  9. Hi all. Has anyone tried unRaid 6.8.0 rc1 yet, to see if the corruption issue is fixed? Release notes don't mention anything about fixing this problem, but one can only hope.
  10. I think it's more that when a sector is written on a data drive, then for parity to be consistent the same sector needs to be updated in near real time on the parity drive as well. The individual drives don't understand this concept, and allowing a drive to update in whichever order it chooses would increase the chance of the parity drive being out of sync with the actual data drives, especially when you have updates on multiple data drives simultaneously. Imagine having to update sector 13456 on drive 3 and sector 25789 on drive 4, in that order, and then the parity drive deciding that is shou
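The consistency rule described above can be shown with a toy single-parity example. This is an illustration only, not unRaid's actual code: with XOR parity, the parity sector is the XOR of the same-numbered sector on every data drive, so every data write implies a matching parity update of `new_parity = old_parity ^ old_data ^ new_data`.

```python
# Illustration only (not unRaid internals): XOR parity must be updated
# in step with the data write it belongs to.
def parity(*sectors):
    out = 0
    for s in sectors:
        out ^= s          # single parity = XOR across the data drives
    return out

d3, d4 = 0x13, 0x25       # one sector stripe across drives 3 and 4
p = parity(d3, d4)        # the parity drive's copy for that stripe

new_d3 = 0x7F             # drive 3's sector is rewritten...
p = p ^ d3 ^ new_d3       # ...so parity is updated "near real time"
d3 = new_d3
assert parity(d3, d4) == p  # in-order updates keep parity consistent
```

If the parity write were deferred or reordered while another data drive also changed, a crash in between would leave parity computed from a stale mix of old and new data, which is exactly the out-of-sync risk the post describes.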
  11. Well, it should not. This corruption is limited to SQLite database files. It seems to manifest on the SQLite databases when there is other heavy I/O workload on the server and, at the same time, the Docker application (Plex, Sonarr/Radarr) is running parallel tasks that need to update the database concurrently. For other use cases like reading/copying/moving or updating files I did not have any problem, and I was doing massive copying/moving of files, above the 200-250GB per day mark, as I am trying to re-organize my backups. My backup .pst file alone is in excess of 25GB, an
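The trigger condition described above (concurrent SQLite commits plus unrelated bulk I/O) can be approximated with a small stress sketch. This is hypothetical test scaffolding, not the actual workload; the file names are placeholders, and on a healthy filesystem the integrity check at the end returns "ok".

```python
# Hypothetical stress sketch: several threads commit to one SQLite DB while
# another thread generates bulk file writes, then the DB is integrity-checked.
import os
import sqlite3
import threading

def db_writer(db_path, n_rows):
    conn = sqlite3.connect(db_path, timeout=30)  # wait out lock contention
    for i in range(n_rows):
        conn.execute("INSERT INTO t(v) VALUES (?)", (i,))
        conn.commit()                  # one commit per row = many small fsyncs
    conn.close()

def bulk_io(path, n_chunks):
    with open(path, "wb") as f:        # stand-in for the big file copies
        for _ in range(n_chunks):
            f.write(os.urandom(1 << 20))  # 1 MiB per chunk

def stress(db_path, scratch_path, writers=4, rows=200, chunks=64):
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS t(v INTEGER)")
    conn.commit()
    conn.close()
    threads = [threading.Thread(target=db_writer, args=(db_path, rows))
               for _ in range(writers)]
    threads.append(threading.Thread(target=bulk_io, args=(scratch_path, chunks)))
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    conn = sqlite3.connect(db_path)
    status = conn.execute("PRAGMA integrity_check").fetchone()[0]
    conn.close()
    return status                      # "ok" when the database survived intact
```

The reported failure mode is that under the affected unRaid releases this kind of mixed load ends with "database disk image is malformed" instead of "ok".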
  12. What I do for checking the Sonarr DB is periodically go through the logs, filtering out everything but the errors. If there is corruption in the DB you will see the malformed message there. Once I have gone through a log set, I will just clear the logs as well. Sonarr initially still seems to be working properly, when in fact the DB has only a few corruptions. Once the number of corruptions increases, Sonarr starts showing slow responsiveness issues, until it reaches a point where you can't even get to the landing page. Anyway. When doing manual backups, unl
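Instead of waiting for the malformed message to surface in the Sonarr logs, the database file can be probed directly with SQLite's built-in integrity check. A minimal sketch, assuming the linuxserver config path mentioned earlier; the exact DB file name inside /config is a placeholder.

```python
# Sketch: ask SQLite directly whether a database file is intact.
import sqlite3

def check_db(path):
    conn = sqlite3.connect(path)
    try:
        # Returns a single "ok" row for a healthy DB, or a list of problems.
        rows = conn.execute("PRAGMA integrity_check").fetchall()
    finally:
        conn.close()
    return [r[0] for r in rows]

# check_db("/mnt/disk1/appdata/sonarr/nzbdrone.db")  # placeholder DB name
```

A corrupted file reports the same "malformed" condition here well before Sonarr becomes slow enough to notice.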
  13. Downgraded again to the 6.6.x version as, at least for now, my use case involves periods of heavy I/O load on the server. The problem, as far as I can pinpoint it, is related to concurrent writes to the same file by more than one thread/process under heavy I/O load. It is obvious that under certain load circumstances the updates are not being applied to the file in the proper sequence, meaning that a disk section that has in theory been updated by process A and then process B is actually getting written to disk first by process B and then by process A, leaving the actual file in an inconsiste
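The reordering claim above can be made concrete with a toy example (an illustration of the hypothesis, not unRaid internals): when two writes to overlapping regions of the same file land in the wrong order, the surviving bytes reflect the wrong "last writer".

```python
# Toy illustration: the final file contents depend on write ordering
# whenever two writes touch overlapping byte ranges.
def apply_writes(data, writes):
    buf = bytearray(data)
    for offset, payload in writes:     # writes land in list order
        buf[offset:offset + len(payload)] = payload
    return bytes(buf)

base = b"........"
write_a = (0, b"AAAA")   # process A updates bytes 0-3
write_b = (2, b"BBBB")   # process B then updates bytes 2-5 (overlap)

in_order  = apply_writes(base, [write_a, write_b])  # b"AABBBB.."
reordered = apply_writes(base, [write_b, write_a])  # b"AAAABB.."
assert in_order != reordered  # same writes, different on-disk result
```

SQLite is especially sensitive to this because its crash-safety protocol assumes journal writes reach disk before the pages they protect; invert that order and the file can be left in exactly the inconsistent state described.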
  14. Same results with 6.7.3-rc2. After the upgrade everything worked properly for a while, with no SQLite corruptions. Almost one hour after starting the manual import in Sonarr, again using a set of 3-4GB media files, the corruption issue appeared. The only difference is that this time both Plex and Sonarr have database corruptions. First the Sonarr DB was corrupted, and several minutes later so was Plex's. As far as I understand, the corruption is happening when there is a heavy load on the unRaid server, e.g. copying large files from one disk of the array to another. As I
  15. OK. So I switched to the binhex docker for Sonarr (instead of linuxserver) and the plexinc docker for Plex (instead of limetech, which was deprecated anyway). Then I upgraded again to unRaid 6.7.2. I did not rebuild the databases; instead I backed up appdata and pointed the new dockers to the old paths. The system was running for almost 3 days straight without any SQLite corruption. However, for the first 3 days I did not do any heavy lifting. That is, only a few new TV episodes were added, and those sporadically. Then I decided to force heavy load on both Sonarr &