Abzstrak

Everything posted by Abzstrak

  1. yeah, I've been keeping hourly backups of my database, makes it easier to restore to wherever I need to. Just a script copying it hourly to a different folder, works fine. I've been running this command hourly (using the User Scripts addon). Obviously you'll need to create the dbbackups folder first:

tar zcvf /mnt/cache/appdata/PlexMediaServer/Library/Application\ Support/Plex\ Media\ Server/Plug-in\ Support/Databases/dbbackups/com.plexapp.plugins.library.db-$(date +%A-%H%M).tar.gz /mnt/cache/appdata/PlexMediaServer/Library/Application\ Support/Plex\ Media\ Server/Plug-in\ Support/Databases/com.plexapp.plugins.library.db
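If it helps anyone, here's the same idea wrapped in a small script that also prunes old archives so the folder doesn't grow forever. This is only a sketch, not my exact User Scripts entry; the backup_db function name and the 7-day retention are my own choices here:

```shell
#!/bin/bash
# Sketch of an hourly Plex DB backup with pruning. The paths and the
# 7-day retention are assumptions -- adjust them to your setup.
DB="/mnt/cache/appdata/PlexMediaServer/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db"
BACKUPS="$(dirname "$DB")/dbbackups"

backup_db() {
    local db="$1" dest="$2"
    mkdir -p "$dest"
    # timestamped archive, same naming idea as the tar one-liner above
    tar zcf "$dest/$(basename "$db")-$(date +%A-%H%M).tar.gz" "$db"
    # prune archives older than 7 days
    find "$dest" -name '*.tar.gz' -mtime +7 -delete
}

if [ -e "$DB" ]; then
    backup_db "$DB" "$BACKUPS"
fi
```

Scheduled hourly via the User Scripts addon, this keeps roughly a week of restore points while staying a few KB of script.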
  2. you can use the storage while parity is being created, obviously sans any redundancy until it's complete. Be aware that you'll probably top out around 50MB/s unless you enable turbo mode, and even then you'll have a hard time averaging twice that speed. I copied over 12TB and it took 3 days over gigabit... keep in mind I never saw gigabit speed except in small bursts. I also didn't know to enable turbo mode until part way through. Cache won't help with this transfer unless you have 8TB+ of cache. Getting used to the way things work will take a bit of time too; I'd suggest playing with a spare machine to get an idea of how things work to speed up the transition.
  3. yeah, kinda not cool in that we can't test anything easily. I just grabbed the binary off my desktop and stuck it in the folder I've been testing db consistency with; seemed fine.
  4. you can use hdparm. I have a similar issue on my drives; I added the following line to /boot/config/go:

hdparm -W 1 /dev/sd[b-g]

Obviously change it according to your needs.
  5. Can you add sqlite3? Apparently it's been removed from unraid 6.7.1 forward
  6. Why am I getting this now? sqlite3: command not found. Sure makes checking for sqlite corruption hard, guys....
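For anyone wondering, the corruption check everyone in this thread keeps running is just SQLite's built-in integrity check against Plex's library db. A sketch, assuming a working sqlite3 binary on the PATH (which is exactly what 6.7.1 seems to have dropped) and my db path; ideally stop Plex or check a copy so you don't race a write:

```shell
#!/bin/bash
# Sketch: run SQLite's built-in integrity check on a database file.
# Prints "ok" when the database is healthy; anything else means trouble.
check_db() {
    sqlite3 "$1" "PRAGMA integrity_check;"
}

# Path from my setup -- adjust to yours.
DB="/mnt/cache/appdata/PlexMediaServer/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db"
if [ -e "$DB" ]; then
    check_db "$DB"
fi
```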
  7. I just upgraded to 6.7.1 only because of this statement in the announcement: "We are also still trying to track down the source of SQLite Database corruption. It will also be very helpful for those affected by this issue to also upgrade to this release."
  8. Regarding this addition: "shfs: support FUSE use_ino option". How do we enable this? Or is it the default now? Also, I ran the upgrade; all OK so far. I noticed enable direct IO was set back to auto, so I put mine back to "no" for now.
  9. Perhaps, but I don't think 1 week is sufficient to determine stability. I don't trust it yet
  10. megaraid will obfuscate it; you'll never get it the normal way... storcli can probably pull data from there, though I'm not sure how to do that easily through unraid. Also, good luck getting anything from that OWC box. Using weird crap will get you weird results and/or lots of work. Best of luck.
  11. c'mon man, try making a new thread instead of necromancing someone else's thread with a similar, but not identical, issue.
  12. unraid's "array" is closer to RAID 3 than anything else. This allows for easier data recovery and more power savings; for smaller amounts of data, like at home, this is great. It also allows adding drives of dissimilar size, which for most of us home users is really great too. However, it SEVERELY limits IOPS (because a single disk handles parity instead of it being striped): you will never be faster than the slowest single drive in your "array".

I was pretty skeptical, but it is pretty great for general home use. My "array" drives are all matched enterprise-class 7200 rpm NL drives with 128MB cache. The maximum transfer I can get in regular mode is about 52MB/s, and in turbo mode about 102MB/s. That's really pretty bad compared to RAID. The cache pool is a bandaid to make up for these limitations, and it works fine. It saves power too. I would suggest never running a VM from the array, as it really runs like crap... but if you run VMs from a BTRFS pool, they run well. You have to weigh the pros and cons.

I have a larger dual-Xeon server too, with 8 drives on a caching controller, but it sucks down all kinds of power, generates heat and makes a great deal of noise. It runs ESXi and is great when I'm labbing things... but I wanted to stop running it all the time. My new unraid box is an mITX i3 that runs most of the time with no or one drive spun up, is dead silent, and pulls about 28W on average. The wife approval factor is significantly higher on the unraid box running 24/7.

If you want something with a more traditional setup to run as a NAS with a pretty web interface, I'd suggest Open Media Vault (OMV): it's based on Debian, supports dockers and VMs, handles BTRFS, ZFS, XFS, EXT4 and whatever else, and runs well. You could also run SnapRAID with mergerfs on it to get a similar experience to unraid in being able to tack together dissimilar drives, albeit with delayed parity updates. Also, it's FOSS and free to use.
  13. it's been a week now since I added a cache pool, moved appdata to cache only and set direct IO to "no"... I keep randomly checking the consistency of the database and it's still coming back "ok". Cautiously optimistic. I've added a number of videos and recorded some via DVR, no problems as of yet. I am on a new install, built the machine about 4 weeks ago.

What CPU? How much RAM? Array config?
- Asrock H370ITX/ac with i3-8300
- 32GB DDR4 (2x 16GB sticks)
- 6x SATA 4TB Seagate Constellation drives (double parity array, 16TB usable)
- 2x Intel 660p 512GB NVMe (BTRFS pool, raid 1)

Roughly how large is your collection? I think file count, even a rough estimate, is more important here.
- Approximately 26500 files in the Plex collection across 5 libraries.

Have you set your appdata to /mnt/cache (or for those without cache, /mnt/disk1)? If you haven't, we'll ignore you.
- It was set to /mnt/user/appdata originally. This usually corrupted in less than 24 hours.
- I moved appdata and system to all one drive, mapping appdata in the Plex docker to /mnt/disk2/appdata. This was better; it would last about 72 hours.
- I then moved the transcode folder to a tmpfs drive of 24GB max, since I noticed corruptions mainly around larger DVR recording nights (5 or more shows). This lasted 5 days before I saw corruption.
- I added the cache array (didn't have it before) and then moved appdata and system entirely to cache only, setting the docker to /mnt/cache/appdata. At the same time, I set direct IO from "auto" to "no". It has been 1 week now with no corruption. I'm not trusting it at all.

Do you have a link between Sonarr and Plex? If yes, have you disabled it? If you haven't, we'll ignore you.
- I only use Plex. The goal was to switch everything to unraid, but this issue has hindered that effort greatly. Until this is stable, I have only the Plex and Syncthing dockers.

Do you have automatic library update on change / partial change? If yes, have you set it to hourly? If you haven't, we'll ignore you.
- Yes. All check boxes are checked in Plex except music in automatic updates. Library scan interval is 24 hours. This is more controversial.

Can you rebuild your db from scratch?
- This was a from-scratch build. I've run Plex for quite some time and never once saw corruption before coming to unraid 4 weeks ago. Still on trial due to this issue alone. If I rebuild again it will not be on unraid; I'll probably go back to Debian or OMV, mainly due to the annoyance and monotony of redoing DVR.

<add more points as things progress>

I am mapping /dev/dri into the container for Intel GPU decoding in Plex. Not sure who else is doing this, or whether it's related.
  14. probably whatever you did trying to format it beforehand; it doesn't like that... no reason to do that either, unraid will take care of it. I'm fairly new to unraid, so I don't know if there is a preferred way to do this in the GUI. I would just ssh in, zero the first 10GB or so of the drive, run partprobe, and then use the GUI to set things up like normal.
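Roughly what I mean, as a sketch; zero_head is just a name I'm using here, and the device name is a placeholder. This is destructive, so triple-check the device before running it:

```shell
#!/bin/bash
# Sketch: wipe the first N MiB of a disk so unraid sees it as blank.
# DESTRUCTIVE -- the device name in the commented example is a placeholder.
zero_head() {
    local dev="$1" mib="$2"
    dd if=/dev/zero of="$dev" bs=1M count="$mib" conv=fsync
    # re-read the (now empty) partition table if it's a real block device
    [ -b "$dev" ] && partprobe "$dev"
    return 0
}

# zero_head /dev/sdX 10240    # zeroes the first 10GB of /dev/sdX
```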
  15. it's been 5 days here since I went cache-only with appdata and set direct IO to "no"... I keep randomly checking the consistency of the database and it's still coming back "ok". Cautiously optimistic. I've added a number of videos and recorded some via DVR, no problems as of yet. I've been running hourly backups due to this issue... I'm on the verge of writing a script to verify the database and email me if it corrupts, but I've been feeling lazy. Shouldn't be too hard, though: do hourly backups, verify consistency, and if it goes bad, email the admin with the name of the last good backup from an hour prior. Auto-restoring it wouldn't be that hard either; the trick would be making sure that Plex wasn't doing something at the time (like recording something on the DVR).
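The script I'm describing would only be a few lines. A sketch of the check-and-alert part; verify_and_alert is just an illustrative name, and you'd pipe the message to sendmail or unraid's notify script instead of echoing it:

```shell
#!/bin/bash
# Sketch: verify the live Plex db; if it's corrupt, report the newest
# backup archive, which should be from before the corruption hit.
# The paths below are from my setup, and echo stands in for email.
verify_and_alert() {
    local db="$1" backups="$2" last_good
    if [ "$(sqlite3 "$db" "PRAGMA integrity_check;" 2>/dev/null)" = "ok" ]; then
        echo "db ok"
    else
        last_good=$(ls -t "$backups"/*.tar.gz 2>/dev/null | head -n1)
        echo "db CORRUPT, last good backup: $last_good"
        # pipe the line above to sendmail / unraid's notify here
    fi
}

DB="/mnt/cache/appdata/PlexMediaServer/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db"
if [ -e "$DB" ]; then
    verify_and_alert "$DB" "$(dirname "$DB")/dbbackups"
fi
```

Run hourly right after the backup step and you get a corruption alarm with the restore candidate already named; I'd still stop Plex before actually restoring.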
  16. you're using USB? yeah, that will make it slower for a lot of reasons, plus the overhead USB brings along... if you are going to use USB, expect it to be a lot slower than if it were SATA/SAS.
  17. Sorry, I thought you said SSD, not NVMe; reading too fast. However, smartctl can output on an NVMe, info and example --> https://www.smartmontools.org/wiki/NVMe_Support Most (though not all) NVMe drives don't support the SATA/SAS-type SMART commands, but it really would be nice if unraid would just run a smartctl -x on an NVMe and give us that. Example on my box:

# smartctl -x /dev/nvme1n1
smartctl 7.0 2018-12-30 r4883 [x86_64-linux-4.19.41-Unraid] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number:                       INTEL SSDPEKNW512G8
Serial Number:                      BTNH90920C3Y512A
Firmware Version:                   002C
PCI Vendor/Subsystem ID:            0x8086
IEEE OUI Identifier:                0x5cd2e4
Controller ID:                      1
Number of Namespaces:               1
Namespace 1 Size/Capacity:          512,110,190,592 [512 GB]
Namespace 1 Formatted LBA Size:     512
Local Time is:                      Mon Jun 17 08:36:10 2019 CDT
Firmware Updates (0x14):            2 Slots, no Reset required
Optional Admin Commands (0x0017):   Security Format Frmw_DL Self_Test
Optional NVM Commands (0x005f):     Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Maximum Data Transfer Size:         32 Pages
Warning  Comp. Temp. Threshold:     77 Celsius
Critical Comp. Temp. Threshold:     80 Celsius

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +     3.50W       -        -    0  0  0  0        0       0
 1 +     2.70W       -        -    1  1  1  1        0       0
 2 +     2.00W       -        -    2  2  2  2        0       0
 3 -   0.0250W       -        -    3  3  3  3     5000    5000
 4 -   0.0040W       -        -    4  4  4  4     5000    9000

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         0

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x00
Temperature:                        36 Celsius
Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    0%
Data Units Read:                    620,079 [317 GB]
Data Units Written:                 1,358,920 [695 GB]
Host Read Commands:                 3,438,919
Host Write Commands:                6,865,860
Controller Busy Time:               120
Power Cycles:                       6
Power On Hours:                     182
Unsafe Shutdowns:                   0
Media and Data Integrity Errors:    0
Error Information Log Entries:      0
Warning  Comp. Temperature Time:    0
Critical Comp. Temperature Time:    0

Error Information (NVMe Log 0x01, max 256 entries)
No Errors Logged
  18. Mine is set to auto (which is the same as "no", AFAIK), but I'll change it to "no" as soon as I can stop the array. @Toobie, I'm new to unraid too; the first version I've ever installed is 6.7.0, and it has these corruption issues... so no update was needed to cause it. [edit] I've set direct IO to "no" now. [/edit]
  19. yeah, sitting here with 32GB too, but tmpfs for things like a Plex transcode folder is a good use of the RAM, and it reduces cache writes too. I made one with a 24GB max.
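For anyone wanting to do the same, the tmpfs can be set up from /boot/config/go at boot. A config sketch; the /mnt/transcode mount point and the 24GB cap are just what I picked:

```shell
# /boot/config/go fragment (sketch): tmpfs with a 24GB cap for Plex
# transcoding. tmpfs only consumes RAM as files are written, up to the cap.
mkdir -p /mnt/transcode
mount -t tmpfs -o size=24g tmpfs /mnt/transcode
# then map /mnt/transcode into the Plex docker as its transcode directory
```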
  20. Best bet is probably to go cacheless for now. yeah, read that Tom's Hardware link I posted: when the drive crapped out on them, it entered a read-only state... sure sounds similar to your issue. I would find out what is needed for RMA; I think it just bit the dust. Also, just an FYI, some people make RAM drives for things like the Plex transcode folder when their appdata is on an SSD array, just to help reduce writes. Might be worth considering in the future.
  21. For me the best way to replicate a database corruption is to have Plex DVR recording 3 or more things simultaneously... problem is, that's still not 100%. It seems to happen more often than not around DVR recordings, but that might be coincidence. @Squid - I had missed that limetech had addressed this some; that's good, thank you. @Spatial Disorder - I am running Syncthing too and haven't seen an issue with it, but I don't think it uses sqlite, does it?
  22. Sorry, I might not have been clear... I meant address it, like mention they are looking into it or something. I don't expect anyone to pull a fix out of thin air, but addressing the situation publicly should be done. Even if it's just saying they don't think it's real, at least communicate about it.
  23. that's not true. Some SSDs do and some don't. He has an Intel 600p, which supports some of the tests. Obviously things like spun-up time are silly on an SSD, but keeping track of lifetime stats like data written/read is important and useful.

So going by your pics in previous posts, you are 2/3 of the way through the lifetime-writes maximum the SSD is rated for, but over at Tom's Hardware they had the 600p crap out around 105TB (you are at 94.5TB in the pic)... so it might be that it's getting too beat up. References:
Intel Ark - https://ark.intel.com/content/www/us/en/ark/products/94921/intel-ssd-600p-series-256gb-m-2-80mm-pcie-3-0-x4-3d1-tlc.html
Tom's Hardware - https://www.tomshardware.com/reviews/intel-ssd-600p-nvme-endurance-testing,4826.html

Good that you were trimming occasionally; it helps a bit with wear leveling and maintaining performance. Weekly or monthly is fine with your usage, though; daily just adds extra stress to the drive, so I'd probably put it back to weekly. Another thing that is hard on NVMe is temperature; a lot of people put a heatsink on them, since they are only a few dollars and can make things last longer.

On the plus side, it has a 5 year warranty and it hasn't been sold that long, so worst case you gotta RMA the drive. If I were in your shoes (I might be some day, since I have two 660p's in mine as cache), I'd probably try to run the Intel toolbox on the drive, since it has Intel's diagnostics. I've never run it on an NVMe, so I'm unsure of the results, but it seems prudent. The only annoying thing is they only release it for Windows... I have no Windows machines, and you might not either... makes it irritating, but not impossible. https://www.intel.com/content/www/us/en/support/articles/000005800/memory-and-storage.html
  24. while Plex is the most prevalent, people are seeing it with most apps that use sqlite. I'm still on the trial and have little desire to spend $130 on a piece of software that doesn't work for my purpose. I figured unraid would be easy and allow some extra power savings by spinning down drives, but I've been a Linux system admin for 15 years, and I'm strongly considering dumping it for a normal distro or going back to OMV on this box. It's also highly irritating that this seems to be fairly widespread but isn't being addressed by limetech.
  25. I have too. I started out with unraid 6.7.0 and have only been using it a couple of weeks, still on the trial... not impressed. It was a new install, new database, etc... I first got a db corruption bad enough that Plex wouldn't start within 3 days of the install. I came from OMV, where Plex ran for the last 18 months and I never saw a corrupt database.