Abzstrak

Members
  • Content Count: 64
  • Joined
  • Last visited

Community Reputation: 7 Neutral

About Abzstrak

  1. Turn off Nextcloud for a while, spin down the disks, and see what happens. Keep turning things off until something has an effect. Once you figure out which program, docker container, or whatever it is, then you can troubleshoot it... right now you're just guessing.
  2. I would, yes. Do your worst: abuse the system, see how big the transcode folder gets, then add 10-15% and go back to using a RAM drive that is at least that large, assuming you have enough RAM. Don't worry about the caching, it's normal... unused memory is wasted memory... it's a Unix thing. No worries, the system gives it up when something else needs it.
  3. True, but the automatically created one uses the mount defaults, including a max size of half your RAM... which can be very important. For example, I found that if I'm DVRing and watching something, I can pull down 22GB of transcode space, and I have 32GB of RAM. With the defaults I'd run out of space and my DVR recordings would get auto-cancelled. I mounted manually with a max size of 24GB to avoid this (see the mount sketch after this list).
  4. Yeah, I've just been lazy... I was thinking of doing that, as well as shooting out an email/SMS to my phone to let me know it's corrupt. It's been 10 or 11 days since my last corruption, so... the motivation isn't all there either.
  5. Yeah, I've been keeping hourly backups of my database; it makes it easier to restore to where I need to. I just script-copy it hourly to a different folder, and it works fine. I've been running this command hourly (using the User Scripts plugin); obviously you'll need to create the dbbackups folder first, and there's a restore sketch after this list:
     tar zcvf /mnt/cache/appdata/PlexMediaServer/Library/Application\ Support/Plex\ Media\ Server/Plug-in\ Support/Databases/dbbackups/com.plexapp.plugins.library.db-$(date +%A-%H%M).tar.gz /mnt/cache/appdata/PlexMediaServer/Library/Application\ Support/Plex\ Media\ Server/Plug-in\ Support/Databases/com.plexapp.plugins.library.db
  6. You can use the storage while parity is being created, obviously sans any redundancy until it completes. Be aware that you'll probably top out around 50MB/s unless you enable turbo mode, and even then you'll have a hard time averaging twice that. I copied over 12TB and it took 3 days over gigabit... keep in mind I never saw gigabit speed except in small bursts, and I didn't know to enable turbo mode until partway through. Cache won't help with this transfer unless you have 8TB+ of cache. Getting used to the way things work will take a bit of time too; I'd suggest playing with a spare machine to get a feel for how things work and speed up the transition.
  7. Yeah, kind of not cool in that we can't test anything easily. I just grabbed the binary off my desktop and stuck it in the folder I've been testing db consistency with; it seemed fine.
  8. You can use hdparm. I have a similar issue on my drives; I added the following line to /boot/config/go (obviously change the device list to suit your setup; there's a fuller sketch after this list):
     hdparm -W 1 /dev/sd[b,c,d,e,f,g]
  9. Can you add sqlite3 back? Apparently it's been removed from Unraid 6.7.1 onward.
  10. Why am I getting this now? sqlite3: command not found. Sure makes checking for SQLite corruption hard, guys... (the check I mean is sketched after this list).
  11. I just upgraded to 6.7.1 only because of this statement in the announcement: "We are also still trying to track down the source of SQLite Database corruption. It will also be very helpful for those affected by this issue to also upgrade to this release."
  12. Regarding this addition: "shfs: support FUSE use_ino option". How do we enable this? Or is it the default now? Also, I ran the upgrade and all is OK so far. I noticed that "Enable Direct IO" was set back to auto; I put mine back to "no" for now.
  13. Perhaps, but I don't think one week is sufficient to determine stability. I don't trust it yet.
  14. The MegaRAID controller will obfuscate it; you'll never get it the normal way. storcli can probably pull the data, but I'm not sure how to do that easily through Unraid. Also, good luck getting anything out of that OWC box. Using weird hardware gets you weird results, or at least a lot of work. Best of luck.
  15. Just use df; it's the easiest way I know of... not sure why you want to know, though. Run "df -i" and it will show free and used inode counts per mounted filesystem (example after this list).
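
A minimal sketch of the larger-than-default transcode RAM disk from post 3, assuming you mount it yourself; the 24GB cap comes from that post, but the mount point is made up and should match whatever path your Plex container actually uses for transcoding.

    # mount a tmpfs with an explicit 24GB cap instead of the half-of-RAM default
    # (the mount point is an assumption -- point it at your real transcode dir)
    mkdir -p /tmp/PlexRamScratch
    mount -t tmpfs -o size=24g tmpfs /tmp/PlexRamScratch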
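
A rough restore sketch to pair with the hourly backup command in post 5, assuming the same paths; the file name here is only an example of what the %A-%H%M naming produces. Stop Plex before extracting.

    # GNU tar stripped the leading / when the backup was created, so extracting
    # with -C / drops the database back into its original location
    tar zxvf "/mnt/cache/appdata/PlexMediaServer/Library/Application Support/Plex Media Server/Plug-in Support/Databases/dbbackups/com.plexapp.plugins.library.db-Monday-0100.tar.gz" -C /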
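
A slightly more general take on the go-file line from post 8, as a sketch only: instead of hard-coding device letters it loops over whatever sd devices exist at boot, which you may or may not want on your hardware.

    # /boot/config/go -- enable the on-drive write cache for each disk at boot
    for d in /dev/sd[a-z]; do
        [ -b "$d" ] && hdparm -W 1 "$d"
    done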
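
This is the kind of check posts 9 and 10 are about; the database path is the Plex one from post 5, and it assumes a working sqlite3 binary is available (on 6.7.1 you would have to supply your own copy, as in post 7).

    # SQLite's built-in consistency check; a healthy database prints "ok"
    sqlite3 "/mnt/cache/appdata/PlexMediaServer/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db" "PRAGMA integrity_check;"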
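
For post 15, what the call looks like in practice; the path is just an example mount point.

    # per-filesystem inode usage: Inodes (total), IUsed, IFree, IUse%
    df -i /mnt/cache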