Rich Minear

Downgraded back to 6.6.7 due to Sqlite corruption


Posted (edited)
14 minutes ago, limetech said:

Can you provide link to that discussion?

I don't remember where I saw it stated that mergerfs also had the same issue. I don't think it was a discussion.

The reason I said sqlite should fix it was that, from what I've read about it now and then, sqlite was the common denominator.

 

I found this from a quick Google search. It's not where I read it, but might give you a hint?

https://github.com/trapexit/mergerfs#plex-doesnt-work-with-mergerfs

Edited by saarg

2 hours ago, CSmits said:

the second post starts with "Not correct."

You misread my post. As it said, if you have problems with /mnt/user/appdata/..., you will also have problems on previous versions of unRaid. This issue affects the users it affects on all versions of unRaid. Whether some change in 6.7 has made it worse I can't say, but the truth of the matter is that the issue has existed for all of 6.x for some users and not others.


I also started experiencing constant Plex database corruption after upgrading to the RC release of 6.7.0 (this was a few months ago). Unfortunately the only way to fix it was to switch the config to the cache disk. This puts a dent in my plan to get rid of my cache disks, since I was planning on running my array solely on NVMe drives. I do hope this issue gets resolved soon.


On Settings/Global Share Settings there is "Tunable (enable Direct IO)" - default is set to Auto, which means No, but if you set this to Yes it might lead to problems with sqlite.  For those experiencing this issue, how do you have this set?  Just want to rule out this is the issue.

3 minutes ago, limetech said:

For those experiencing this issue, how do you have this set?

I have this set on Auto 👍

8 minutes ago, limetech said:

On Settings/Global Share Settings there is "Tunable (enable Direct IO)" - default is set to Auto, which means No, but if you set this to Yes it might lead to problems with sqlite.  For those experiencing this issue, how do you have this set?  Just want to rule out this is the issue.

Mine is also set on Auto.

8 hours ago, CHBMB said:

You don't have to find the problem, we've told you how to fix it. Regardless of whether you have a cache drive or not, ideally your appdata should be confined to a single disk (it prevents keeping the whole array spun up all the while). Whether that is disk1, disk2, or disk9 doesn't matter.

 

Use /mnt/disk1/appdata

 

Same end result. Fixes the issue.

I don't have a cache drive so I'm going to change my current setup from /mnt/user/appdata to /mnt/disk1/appdata. Do I need to stop all docker containers, then edit each container? Also, do I need to change Default appdata storage location under settings/docker?

1 hour ago, Lonnie LeMaster said:

Do I need to stop all docker containers, then edit each container

Yes

 

1 hour ago, Lonnie LeMaster said:

Also, do I need to change Default appdata storage location under settings/docker?

You don't need to, but you should

3 hours ago, Squid said:
4 hours ago, Lonnie LeMaster said:

Also, do I need to change Default appdata storage location under settings/docker?

You don't need to, but you should

With the docker service disabled, the dropdown file picker does not give me the option to choose disk1. It only allows me to choose user.

Posted (edited)

My database just got corrupted again, even with appdata pointed at /mnt/disk3.

Jun 05, 2019 19:32:48.284 [0x1482e0d63700] ERROR - SQLITE3:(nil), 11, database corruption at line 79051 of [bf8c1b2b7a]
Jun 05, 2019 19:32:48.284 [0x1482e0d63700] ERROR - SQLITE3:(nil), 11, statement aborts at 9: [select statistics_bandwidth.id as 'statistics_bandwidth_id', statistics_bandwidth.account_id as 'statistics_bandwidth_account_id', statistics_bandwidth.device_id as 'statistics_bandwidt
Jun 05, 2019 19:32:48.284 [0x1482e0d63700] ERROR - Thread: Uncaught exception running async task which was spawned by thread 0x1482e3428700: sqlite3_statement_backend::loadOne: database disk image is malformed
Jun 05, 2019 19:32:48.285 [0x148289c6b700] ERROR - SQLITE3:(nil), 11, database corruption at line 79051 of [bf8c1b2b7a]
Jun 05, 2019 19:32:48.285 [0x148289c6b700] ERROR - SQLITE3:(nil), 11, statement aborts at 9: [select statistics_bandwidth.id as 'statistics_bandwidth_id', statistics_bandwidth.account_id as 'statistics_bandwidth_account_id', statistics_bandwidth.device_id as 'statistics_bandwidt
Jun 05, 2019 19:32:48.286 [0x148289c6b700] ERROR - Thread: Uncaught exception running async task which was spawned by thread 0x1482e3227700: sqlite3_statement_backend::loadOne: database disk image is malformed

And here's my config:

[attached screenshot: plex.thumb.png]
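For anyone wanting to confirm this kind of damage at the file level (independent of the application's own logs), sqlite ships a built-in integrity check. The sketch below creates a throwaway db just so it runs anywhere; for a real check, point sqlite3 at the actual library db under your appdata path instead:

```shell
# Sketch: use sqlite's built-in integrity check to verify a database file.
# A throwaway db is created here for illustration; substitute the path to
# the real Plex/Sonarr db under your appdata location when checking one.
DB=$(mktemp)
sqlite3 "$DB" "CREATE TABLE t(id INTEGER PRIMARY KEY, v TEXT);"
sqlite3 "$DB" "INSERT INTO t(v) VALUES ('hello');"
sqlite3 "$DB" "PRAGMA integrity_check;"   # prints "ok" for a healthy db
rm -f "$DB"
```

A healthy db prints "ok"; a malformed one prints a list of errors, much like the "database disk image is malformed" messages above.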

Edited by runraid

17 minutes ago, Lonnie LeMaster said:

With the docker service disabled, the dropdown file picker does not give me the option to choose disk1. Only allows me to choose user.

Just type it in

 

/mnt/disk1/appdata/

5 hours ago, Squid said:

Just type it in

 

/mnt/disk1/appdata/

Encountering the same problem. I've mapped all my containers' /config to /mnt/disk2/appdata. Changed all my shares to include only one disk. Made sure there are no lingering files on either disk that are not supposed to be there.

 

I had switched back to 6.6.7 for just short of a week and had no problems. I upgraded to the stable custom Nvidia unraid 6.7.0 yesterday, and when I woke up not 8 hours later, my sonarr database had been corrupted again.

 

Any other ideas what may be causing this?

Encountering the same problem. I've mapped all my containers' /config to /mnt/disk2/appdata. Changed all my shares to include only one disk. Made sure there are no lingering files on either disk that are not supposed to be there.
 
I had switched back to 6.6.7 for just short of a week and had no problems. I upgraded to the stable custom Nvidia unraid 6.7.0 yesterday, and when I woke up not 8 hours later, my sonarr database had been corrupted again.
 
Any other ideas what may be causing this?



Just curious, do you have any volumes mapped to your containers using the unassigned devices plugin?


I have all my appdata and my docker.img mounted on an "unassigned device" and have done since I started using V6 of Unraid in 2015 I think it was.

Sent from my Mi A1 using Tapatalk


Upgraded to 6.7.0 again and moved my dockers to /mnt/disk1. Will see if any corruption occurs.

I have all my appdata and my docker.img mounted on an "unassigned device" and have done since I started using V6 of Unraid in 2015 I think it was.

Sent from my Mi A1 using Tapatalk




I had a volume mapped to plex and sonarr via unassigned devices plugin. Seemed like every time the appdata backup plugin ran, sonarr or plex would be corrupted after. Don’t know if this is related but it hasn’t happened since I stopped using the unassigned devices plugin.


Sent from my iPhone using Tapatalk

12 minutes ago, Lonnie LeMaster said:

I had a volume mapped to plex and sonarr via unassigned devices plugin. Seemed like every time the appdata backup plugin ran, sonarr or plex would be corrupted after. Don’t know if this is related but it hasn’t happened since I stopped using the unassigned devices plugin.

 

hmmm... could be the clue we are after?

4 minutes ago, testdasi said:

hmmm... could be the clue we are after?

So you are saying it's all @Squid's fault? 😛

Posted (edited)

/me gets the pitchforks and torches out

Edited by binhex

Posted (edited)

I have the unassigned devices plugin installed but I’m not using it. I’ve been fighting corruption since I upgraded to 6.7.0. Switched to /mnt/disk3 and I was good for nearly a week then my SQLite db files corrupted yesterday. 

 

Maybe I’ll try uninstalling the plugin. 

Edited by runraid

Posted (edited)
16 minutes ago, saarg said:

So you are saying it's all @Squid's fault? 😛

No, it's octopus' fault. 😂

 

On a serious note though, that sort of fits together (of course, if it doesn't, just treat my post as a random fart 😅)

  • It's known that the mover can corrupt data if it's run while the file being moved to the array is being accessed at the same time (e.g. a Plex media scan). It then wouldn't be surprising if the appdata backup procedure interferes with sqlite's writes and causes data corruption.
  • I've noticed the corruption seems to be reported to happen overnight, when it's highly probable that multiple processes run simultaneously (e.g. a Plex media rescan, appdata backup, the mover, etc.).
  • The fact that some users use /mnt/user adds a red herring to the situation, i.e. we have 2 potential sources of corruption: /mnt/user and the simultaneous read + write.

Could be easy to test the hypothesis I reckon.
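Something like the crude stress test below could serve as a starting point (a sketch only: paths and loop counts are made up, and the cp loop just stands in for the mover/backup reading the file mid-write). On a plain local filesystem sqlite's locking should keep the writer's db healthy, so the interesting question is whether the same run still reports "ok" when $DB sits on a /mnt/user path:

```shell
# Crude sketch: hammer a sqlite db with writes while a second process
# copies it (standing in for the mover / appdata backup), then check
# the writer's db for corruption. Paths and counts are illustrative.
DB=$(mktemp)
sqlite3 "$DB" "CREATE TABLE IF NOT EXISTS t(x INTEGER);"
for i in $(seq 1 200); do sqlite3 "$DB" "INSERT INTO t VALUES($i);"; done &
for i in $(seq 1 50); do cp "$DB" "$DB.bak"; done &
wait
sqlite3 "$DB" "PRAGMA integrity_check;"   # "ok" means the db survived
rm -f "$DB" "$DB.bak"
```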

Edited by testdasi

5 minutes ago, runraid said:

I have the unassigned devices plugin installed but I’m not using it. I’ve been fighting corruption since I upgraded to 6.7.0. Switched to /mnt/disk3 and I was good for nearly a week then my SQLite db files corrupted yesterday. 

 

Maybe I’ll try uninstalling the plugin. 

 

Do you use the backup appdata plugin?

Just now, testdasi said:

No, it's octopus' fault. 😂

 

On a serious note though, that sort of fits together (of course, if it doesn't, just treat my post as a random fart 😅)

  • It's known that the mover can corrupt data if it's run while the file being moved to the array is being accessed at the same time (e.g. a Plex media scan). It then wouldn't be surprising if the appdata backup procedure interferes with sqlite's writes and causes data corruption.
  • I've noticed the corruption seems to be reported to happen overnight, when it's highly probable that multiple processes run simultaneously (e.g. a Plex media rescan, appdata backup, the mover, etc.).
  • The fact that some users use /mnt/user adds a red herring to the situation, i.e. we have 2 potential sources of corruption: /mnt/user and the simultaneous read + write.

Could be easy to test the hypothesis I reckon.

 

That sounds plausible.

I thought the appdata backup stopped the containers first, but I'm probably wrong and the wrath of @Squid awaits...

