Can0n

Everything posted by Can0n

  1. No, I'm sorry to say my Plex is still completely unstable after going to 6.12.3 and adding the extra parameters. I had to restart it three times in a 20-minute span again. I tried the Docker directory option and my reverse proxy failed to work. I tried an XFS docker image file and my reverse proxy failed to work. I went back to the BTRFS Docker image and everything fires up again, but I'm afraid it's probably not going to stay stable. I already have a post in the bug reports section for 6.12.3 with my diagnostics, and the Plex Docker log is useless; it doesn't say anything is wrong.
  2. Tried the Docker directory option and could not get my reverse proxy working at all, so I tried an XFS vdisk and that did not work with my reverse proxy either. I had to go back to the BTRFS vdisk image.
  3. To add to my OP: I restarted it twice while remote at a friend's house and noticed that in the 10-minute commute back home it was down again. That's when I grabbed diagnostics and the Plex logs via docker. All my libraries are unavailable and only the free Plex services show, like Watchlist and Discover. I stopped the array and am running a SMART test on my one-month-old SSD; however, this is an Unraid/ZFS issue, not my hardware. Prior to some BTRFS issues I was having before I replaced my SSDs, I hardly ever needed to restart Plex; now it's 3-6 times per day to get it up and running. I'm thinking of going back to BTRFS. Could it be that my docker.img is a BTRFS vdisk on ZFS? Should I wipe it and create a ZFS one and re-install all my containers?
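For what it's worth, wiping docker.img is low-risk, since containers reinstall from their saved templates and appdata is untouched. A hypothetical sequence (the image path is an assumption, the usual default; check Settings → Docker for yours):

```shell
# Settings → Docker → Enable Docker: No (stops the service), then:
rm /mnt/cache/system/docker/docker.img   # path assumed, not from the post
# Re-enable Docker, choosing the image type (or directory) in the same
# settings page, then reinstall containers via Apps → Previous Apps.
```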
  4. Despite --no-healthcheck being added to the container's advanced settings, I am still having to restart Plex many times a day to get it working for 5 or 10 minutes. The cache pool is on ZFS, and these issues started after converting to ZFS on 6.12.2 and persist in 6.12.3. What's going on? Please help; this is driving me crazy, as I am out and about and have to remote in to restart it when family and friends tell me it is not working again. The docker logs for Plex are useless and say nothing is wrong. Uptime Kuma and the paid service Uptime Robot do not detect an outage on port 32400 through their monitoring. Here is what is coming through while it's down:
     [cont-init.d] executing container initialization scripts...
     [cont-init.d] 40-plex-first-run: executing...
     [cont-init.d] 40-plex-first-run: exited 0.
     [cont-init.d] 45-plex-hw-transcode-and-connected-tuner: executing...
     [cont-init.d] 45-plex-hw-transcode-and-connected-tuner: exited 0.
     [cont-init.d] 50-plex-update: executing...
     [cont-init.d] 50-plex-update: exited 0.
     [cont-init.d] done.
     [services.d] starting services
     Starting Plex Media Server.
     [services.d] done.
     Dolby, Dolby Digital, Dolby Digital Plus, Dolby TrueHD and the double D symbol are trademarks of Dolby Laboratories.
     Critical: libusb_init failed
     diagnostics_thor.zip
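For anyone following along, the flag is singular and goes in the template's Extra Parameters field (a config fragment, shown here as the docker run option it injects):

```shell
# Unraid Docker template → Edit → Advanced View → Extra Parameters:
--no-healthcheck
```

This disables the image's built-in HEALTHCHECK so Docker stops probing the container, which some users on 6.12.x reported as a source of restarts.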
  5. No problem. I added --no-healthcheck and it's pretty stable so far, fingers crossed. For the mover: yes, I had a thread for 6.12.2 with diagnostics about mover being extremely slow from ZFS to XFS and XFS to ZFS, but reconstruct write might have fixed the speed issues. As for datasets and mover, no worries; when I get some time I'll run some mover tests and post a new thread.
  6. I see a new acknowledgment that the release notes have been read. That's a great step. I've been seeing nothing but questions about Docker on Reddit, and when we see the specifics, the issues were already listed in the release notes along with how to fix them. People need to read more, lol.
  7. I sure hope this improves the stability of some of my containers on ZFS. I've been having a lot of problems, especially with Plex (using the official container), and I have a lot of friends and family that use it. It's been going down several times a day, to where restarting the container is necessary. Mover has a lot of problems on 6.12.2 as well: when trying to move shares from a ZFS pool to the XFS array, it never finishes completely because it can't delete the datasets from ZFS.
  8. Mover is still incredibly slow moving anything, even large Linux ISOs, from XFS to ZFS or ZFS to XFS. ZFS is using datasets, so I think it's hanging on the delete portion of the mover script, as it is not checking for datasets and doing a recursive zfs destroy command.
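The dataset-aware cleanup that's being described can be sketched like this (a guess at the needed logic, not mover's actual code; the helper name and paths are hypothetical):

```shell
#!/bin/sh
# Sketch: pick the right removal command for a path once its contents
# have been moved off. A ZFS dataset mountpoint can't be removed with
# rmdir/rm; it needs a recursive destroy instead.
choose_cleanup() {
    path="$1"
    # `zfs list` succeeds only if the path resolves to a dataset; on a
    # plain directory (or a box without ZFS) it fails, so fall back.
    if zfs list -H -o name "$path" >/dev/null 2>&1; then
        echo "zfs destroy -r $path"
    else
        echo "rm -rf $path"
    fi
}
```

On a pool where, say, appdata/PlexMediaServer is a child dataset, that recursive destroy is exactly what a plain rm can't do, which would explain the hang.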
  9. Unfortunately unbalance doesn't support datasets, just shares, so that could explain why I could not see the temp location I had moved my appdata to.
  10. Hello @jbrodriguez, love the unbalance plugin, however I am seeing one issue: one of my cache pools is not showing up. I have 2 parity and 22 array disks, 1 ZFS pool with two 1TB SSDs, and the pool not showing is a second ZFS pool with two 4TB spinners. It is also reading the size of the ZFS SSDs incorrectly, as 690GB. The missing pool is called download-cache, so I'm not sure if there is a character limit for it to display or what the issue is, and I'm wondering if you could take a look.
  11. Thank you for this. I was pulling some drives from a zpool to redo them from a 4-drive config to a 2-drive config, started and stopped too soon, and docker.img held up the stop process. As soon as I saw your command I ran it, and everything unmounted cleanly. Still relevant 3 years later! Thank you!
  12. So I got Plex in appdata working now. I used mover and rsync -avh, and then a quick move from /mnt/cache/apps2/PlexMediaServer to /mnt/cache/appdata/PlexMediaServer. After 15 hours of mover I started using rsync for all the tiny metadata files, and found that the vast majority of the 214GB was old PhotoCache. I removed all that and it brought Plex down to about 64GB.
  13. Well, it was the most painful move ever: 15 hours of mover trying to move the 214GB Plex folder. I ultimately used rsync with -avh to move the folders with the smaller, more numerous files and let mover handle the database files. It took nearly 24 hours, but I got Plex up and running on a ZFS pool now with no watched status lost for myself or my users! I took a snapshot right away.
  14. I do have that, but it doesn't see the download-cache, only the cache and my disks... odd.
  15. Mover seems to have gotten stuck. It says it's still moving, but no data is being moved in the meantime. The only thing not moved for Plex is Cache/PhotoTranscoder, and there are gigs and gigs in it. I might try moving those contents out of the Plex folder; that way, running mover to get Plex back to the cache pool might be quite a bit faster.
  16. My Plex folder alone is 214GB. I have 31 containers installed and run about 20 of them full time.
  17. Yeah, the -v is just verbose so you can see what it is spitting out. Wish I had enough RAM for that; I have 64GB max for my i7 10700K, and I am running a Plex transcoder scratch disk. Write reconstruction seems to be helping.
  18. Yeah, 400GB in 4 hours isn't bad. Plex has millions of little files, but moving from ZFS to XFS is dog slow. Since it's just Plex left on mover now, I moved all the other docker appdata using rsync -avh /mnt/download-cache/apps /mnt/cache/appdata, and from ZFS pool to ZFS pool it took about 45 minutes. Plex went over as well, but when I started the container up it was trying to set me up as a new server. When I mapped the container back to /mnt/download-cache/apps/PlexMediaServer it fired right up, so rsync is breaking something in Plex. I enabled reconstruct write and monitoring; thanks for the tip, I thought I had that on a long time ago.
  19. It's beautiful, I would love to add that now! I have been trying to move a 214GB Plex appdata folder from my 4-drive ZFS zpool to the array (trying to get it onto my newly formatted SSDs in ZFS) for over 13 hours. It's moving sooo slowly and I need to see the status; I have dozens of messages coming in asking why my Plex is down. It's literally not showing in the logs that it's moving. I see trickles of data on Disk1 and the ZFS cache pool here and there, but it is taking forever. The initial appdata move of over 500GB from BTRFS to the array took 4 hours.
  20. So here is the story. My current server (6.12.2) has two 10TB parity disks and 22 x 6, 8 and 10TB XFS disks for the array. I used two 1TB SSDs in BTRFS RAID1 as the cache, and two 4TB spinners as a secondary pool called download-cache, also BTRFS. I also have two USB3 4TB drives attached with Unassigned Devices. I wanted to convert both pools to ZFS, so here is what I did.
      I set mover for the download-cache to move all the data to the array, and stopped all applications using the download-cache so no files were locked. I ran mover, and when it was done, set the shares (Media and a couple of others) to array as primary with no secondary storage. Once the download-cache was empty I let the apps use it again, as the data would only go to the array. I stopped the array and removed the 2x4TB disks, started the array without them, then stopped it again and assigned them, set the file system to ZFS, and formatted them, adding the 2x USB3 4TB drives as well for a raidz1 group of 4 disks.
      Once done, my next step was the appdata, domains and system shares (currently on BTRFS). I shut down the VM manager and Docker and set the shares to move to the array. Once done, I did the same process to remove the cache, re-add it, and format it as a ZFS mirror, 1 group of 2 devices. I decided to rsync the data back to the cache from the array, as mover took over 4 hours and rsync took about 1.5 hours. Most of my dockers failed to work correctly, and I saw permissions completely borked. I ran mover to move them to the download-cache to test them, and they all worked, albeit slower being on spinner drives. I then used a different rsync option to keep times, owners/groups and permissions: rsync -avz /mnt/download-cache/apps /mnt/cache/appdata. That seemed to copy OK, and all my dockers except Plex worked. Plex is acting like it's a new server despite the database and all files seeming to be there.
      I updated the config to point back to download-cache, where the apps folder still had all the data, and Plex is working, albeit slower. So I decided to run mover on it last night; the only thing moving is the Plex folder, and after 12 hours it is still moving. Diagnostics attached, but I would like to know why over 400GB on BTRFS took 4 hours to move while 214GB from ZFS to XFS is taking more than 12 hours.
      Some takeaways: it would be nice to have a progress bar on mover to see its status, and potentially pause it if the drives' bandwidth is needed. If a progress bar is not easy to add, perhaps an estimated time to completion based on its current file list. thor-diagnostics-20230706-1155.zip
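Until mover grows a progress bar, a rough one can be improvised by polling how much data remains at the source (the share path below is just the one from this thread; the helper is hypothetical):

```shell
#!/bin/sh
# Prints the current size of a directory tree; call it in a loop while
# mover runs to watch the source shrink. Purely an observer.
remaining() {
    du -sh "$1" 2>/dev/null | cut -f1
}

# Example: check every 60 seconds, Ctrl-C to stop.
#   while sleep 60; do remaining /mnt/download-cache/apps/PlexMediaServer; done
```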
  21. Found it, under /boot/config/plugins/dynamix/thor2freya.cron. I renamed it thor2freya.cron.old. I have other crons I created there that I am going to remove. I created that one almost 5 years ago when I had 3 Unraid servers. Freya ultimately died and I recycled it; the 3rd one has become a Windows Server 2022 print server, as the case's drive bays were killing drives at an alarming rate. Now it's just a single 256GB SSD lying inside the case.
  22. Found it!! Thank you for your help.
      root@Thor:~# nano /etc/cron.d/root
      root@Thor:~# grep -r "Backup Share to Freya" /boot/
      /boot/config/plugins/dynamix/thor2freya.cron:#*** rsync Thor's Backup Share to Freya
  23. So, looking up mover: it's now running and has been over 7 hours trying to move to the array. It appears mover is just using rsync anyway, so I'm wondering what the full rsync string in mover is, so I can do a copy (not a move) using the same options. That way I can keep a working copy of Plex on the array should the move to the cache fail again for whatever reason. Is it by chance this script here? https://gist.github.com/fabioyamate/4087999
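I don't know mover's exact rsync invocation, but a copy that preserves the usual metadata (permissions, owners, times, plus hard links) is straightforward; shown on temp directories so it's runnable anywhere, with the real paths only in the comment:

```shell
#!/bin/sh
# Copy, not move: -a is archive mode (permissions, owners, timestamps,
# symlinks), -H additionally preserves hard links. There is no
# --remove-source-files, so the source stays intact as a fallback.
# In practice the source would be something like
# /mnt/cache/appdata/PlexMediaServer and the destination an array path.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/library.db"
rsync -aH "$src/" "$dst/"
```

Trailing slashes on both sides copy the contents of $src into $dst without nesting an extra directory level.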
  24. I used rsync because mover won't move files off of ZFS. The rsync moved all files from apps2 on the download-cache to appdata on the cache, and it retained the timestamps, permissions and users/groups from the download-cache. My Plex config folder is approx 214GB, so rather large, and there was no way to move it from one cache to the other. It took over 4 hours for mover to move it from cache to array and from array to download-cache, and once I tried to move it back to the array it completely failed; mover just stops for no reason. So I was stuck using rsync, which actually took about 45 minutes to copy all the containers from appdata that I had put into "apps". I verified the database files are there, and all the files that should be there are. When I load from a URL I configured to get me there using internet DNS, I get the interface but all my libraries have a triangle; when I map the Plex container back to the apps folder on download-cache, it loads my server and libraries just fine.
  25. So I just noticed it showed up again with no stop and start of the array. Here is a screenshot of the /etc/cron.d/root file, and maybe I'm using grep wrong, but there are no results for the cron schedule with the rsync command and its options (attached too).