BigFam

Everything posted by BigFam

  1. Is it possible to have two or more mirrored cache pools with ZFS? I previously created an SSD mirrored cache pool with ZFS successfully, but when following the same process to create another ZFS-mirrored cache pool with a pair of HDDs, I get the error indicated earlier in this thread and cannot resolve it by following the instructions. Thanks.
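     For context, outside the Unraid GUI a two-disk ZFS mirror amounts to something like the sketch below. The pool and device names are hypothetical, and Unraid normally manages this itself; a stale signature left over from a previous filesystem is one common cause of pool-creation failures, which wipefs can reveal.

       # List any leftover filesystem signatures on each disk (hypothetical names)
       wipefs /dev/sdX
       # Create a mirrored pool from the two disks
       zpool create hddpool mirror /dev/sdX /dev/sdY
       # Both members should show ONLINE under the mirror vdev
       zpool status hddpool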
  2. Happy to report I was able to set the disk password in UD, after which I was able to mount the disk. My question is why I had to enter a password in the first place. The disk had previously been formatted with encrypted XFS, but I copied all content off the disk and reformatted it to regular XFS without encryption. I have since run and rebooted the server several times over two weeks with no issues, so I was thrown off and fearing complete data loss from corruption when the disk showed as LUKS after the last reboot...
  3. I have an HDD installed that is not part of the array or any pool. I have used it for some time mounted via UD, and I formatted the drive with XFS after pre-clearing it. I was looking to install another drive, so I did a clean shutdown, pulled the power cord, and discharged any remaining charge in the capacitors by flipping the power switch while unplugged. After I connected the new drive and turned on the system, it wouldn't boot, so I went into the BIOS and found the original drive was not visible. I repeated the procedure to unplug etc. and checked all cable connections. I then booted the system. All drives show up in Unraid, but the original XFS-formatted drive is listed as LUKS. I've used encryption before, so I'm familiar with how it works with entering a password to start the array. In this current situation there is no option in the GUI to enter a password, and I'm not sure why it shows as LUKS in the first place. Has the drive become corrupted? How can I troubleshoot this? Disregard the footer as it pertains to my original Unraid server; this issue is on my second server. Thanks, Carl node-diagnostics-20231222-2345.zip
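     To check what signature Unraid is actually seeing, one could inspect the disk from the console; a sketch, where /dev/sdX is a hypothetical device name to be replaced with the real one from the Main tab:

       # Show the signatures blkid sees on the disk and its first partition
       blkid /dev/sdX /dev/sdX1
       # Exit status 0 means a genuine LUKS header is present
       cryptsetup isLuks /dev/sdX1 && echo "LUKS header found"
       # If so, dump the header details
       cryptsetup luksDump /dev/sdX1

     If blkid reports xfs but the GUI says LUKS, that points at a detection problem rather than corruption; if a LUKS header is genuinely present, the XFS reformat may not have completed as expected.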
  4. What do your container parameter settings look like? I understand you're not using a discrete GPU but Intel CPU integrated hardware transcoding capabilities, correct? Have you verified the settings match the Intel GPU steps below?
     1. Edit your go file and add modprobe i915 to it, save and reboot.
     2. Add the Plex container and add --device=/dev/dri to extra parameters (switch on advanced template view).
     There is no need to chmod/chown /dev/dri; this is handled by the container. On a side note, I used to use the linuxserver.io container but switched to the official Plex Media Server container some years ago with an Nvidia P2000 GPU and never looked back.
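     As a concrete sketch of those two steps (the go file path is the standard Unraid one; the emhttp line is its default content):

       #!/bin/bash
       # /boot/config/go -- runs at every boot
       modprobe i915                  # load the Intel iGPU driver
       /usr/local/sbin/emhttp &       # start the Unraid web UI (default line)

       # Plex container template, Extra Parameters field (advanced view):
       #   --device=/dev/dri

     After a reboot, /dev/dri should exist on the host, and the --device flag passes it through to the container.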
  5. I did stop the pre-clear session, updated to the 2022-03-05 version and resumed the session. The log view still shows the integer error, but on a different line in the script. Just for information, since I'm reassured by your confirmation that it is for notification only and doesn't affect the actual pre-clear of the disk. Thanks for your work, I just made a small donation for your retirement🙂.
     Mar 06 19:43:31 preclear_disk_71G0A46WFQDH_25290: /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 1514: [: : integer expression expected
     These are the release notes from the installed version:
     unassigned.devices.preclear
     Fix: Integer error in preclear script.
     2022.03.04
     Add: Remove libevent (it's included in Unraid) and update utempter package.
     Fix: Right justify the preclear results on the status page.
     2022.02.27
     Initial release.
  6. Great, thanks for the reassuring and clear information, and also for the tip about stopping, updating and resuming the pre-clear session!
  7. I initiated a pre-clear before the fixed version of March 5, and it is still running. Using the old version, the progress preview looks normal, but the log file view shows, every second, the error already mentioned in this thread:
     Mar 06 15:10:15 preclear_disk_71G0A46WFQDH_16775: /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 1510: [: : integer expression expected
     If/when the pre-clear completes using the old version, can I trust the disk if it completes successfully?
     Pre-clear progress preview:
     ########################################################################################
     #                    Unraid Server Preclear of disk 71G0A46WFQDH                       #
     #  Cycle 1 of 1, partition start on sector 64.                                         #
     #                                                                                      #
     #  Step 1 of 5 - Pre-read verification:          [22:50:17 @ 218 MB/s] SUCCESS         #
     #  Step 2 of 5 - Zeroing the disk:               [22:55:56 @ 218 MB/s] SUCCESS         #
     #  Step 3 of 5 - Writing Unraid's Preclear signature:                  SUCCESS         #
     #  Step 4 of 5 - Verifying Unraid's Preclear signature:                SUCCESS         #
     #  Step 5 of 5 - Post-Read in progress:          (33% Done)                            #
     #                                                                                      #
     #  ** Time elapsed: 6:03:55 | Current speed: 267 MB/s | Average speed: 276 MB/s        #
     ########################################################################################
  8. You're right. My plugin is not up to date, as I initiated the pre-clear before March 5th, which seems to be the date of the fixed version. I wonder, though, if the log entries are 'cosmetic' and whether I can trust all stages having completed successfully (when/if they do complete; the pre-clear is still running). I'll post in the original thread you linked. Thank you!
  9. Thanks Squid. Looks like this is a known issue with the pre-clear beta, is that correct? If the pre-clear completes and no errors are shown in the progress, does that mean I can trust the hard drive, or should I install one of the previous versions of pre-clear and run it again until the new version is OK?
  10. I got myself a Toshiba MG09 18TB Enterprise 7200rpm SATA drive and am performing a pre-clear. It's been running for close to 70 hours, whereas my 5400rpm 12TB shucked white-labeled WD Reds usually complete in less than 48 hours. The progress preview in the pre-clear tool shows normal values of 218 MB/s average speed for the pre-read verification and zeroing steps (still running as of typing), but the Pre-clear Disk Log shows warnings/errors. What gives? Should I worry?
      Pre-clear progress preview:
      ########################################################################################
      #                    Unraid Server Preclear of disk 71G0A46WFQDH                       #
      #  Cycle 1 of 1, partition start on sector 64.                                         #
      #                                                                                      #
      #  Step 1 of 5 - Pre-read verification:          [22:50:17 @ 218 MB/s] SUCCESS         #
      #  Step 2 of 5 - Zeroing the disk:               [22:55:56 @ 218 MB/s] SUCCESS         #
      #  Step 3 of 5 - Writing Unraid's Preclear signature:                  SUCCESS         #
      #  Step 4 of 5 - Verifying Unraid's Preclear signature:                SUCCESS         #
      #  Step 5 of 5 - Post-Read in progress:          (33% Done)                            #
      #                                                                                      #
      #  ** Time elapsed: 6:03:55 | Current speed: 267 MB/s | Average speed: 276 MB/s        #
      ########################################################################################
      Pre-clear Disk Log (new entries pop up continuously, about 1 second apart):
      Mar 06 15:10:06 preclear_disk_71G0A46WFQDH_16775: /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 1510: [: : integer expression expected
      Mar 06 15:10:07 preclear_disk_71G0A46WFQDH_16775: /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 1510: [: : integer expression expected
      Mar 06 15:10:08 preclear_disk_71G0A46WFQDH_16775: /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 1510: [: : integer expression expected
      Mar 06 15:10:09 preclear_disk_71G0A46WFQDH_16775: /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 1510: [: : integer expression expected
      Mar 06 15:10:10 preclear_disk_71G0A46WFQDH_16775: /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 1510: [: : integer expression expected
      Mar 06 15:10:11 preclear_disk_71G0A46WFQDH_16775: /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 1510: [: : integer expression expected
      Mar 06 15:10:12 preclear_disk_71G0A46WFQDH_16775: /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 1510: [: : integer expression expected
      Mar 06 15:10:13 preclear_disk_71G0A46WFQDH_16775: /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 1510: [: : integer expression expected
      Mar 06 15:10:14 preclear_disk_71G0A46WFQDH_16775: /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 1510: [: : integer expression expected
      Mar 06 15:10:14 preclear_disk_71G0A46WFQDH_16775: /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 1510: [: : integer expression expected
      Mar 06 15:10:15 preclear_disk_71G0A46WFQDH_16775: /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 1510: [: : integer expression expected
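      For what it's worth, that message is what bash prints when a numeric test receives an empty string instead of a number. A minimal sketch of the failure mode, with a hypothetical variable name rather than the plugin's actual code:

        # An empty string where a number is expected reproduces the message:
        bytes_done=""
        if [ "$bytes_done" -ge 0 ]; then    # -> "[: : integer expression expected"
            echo "progress: ${bytes_done}"
        fi
        # Defaulting the variable to zero avoids the noise:
        if [ "${bytes_done:-0}" -ge 0 ]; then
            echo "progress: ${bytes_done}"
        fi

      This is consistent with the maintainer's later note in the thread that the error only affects status/notification output, not the reads and writes to the disk itself.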
  11. Thanks a lot. I have used mc before but forgot about it. This was exactly what I needed. Thanks!
  12. I guess that could work, but I'm not in a position to mount the drives in my main Unraid server for some time, and I'm not Linux-savvy enough to install Linux, mount the encrypted drives, and copy over the network. If all other options fail I might bite the bullet and learn...
  13. My bad. It is indeed possible to mount the disks without starting the array. I can browse the mounted disk from the Unraid GUI, but how do I perform file operations? I'm used to using Krusader, but I can't start the array to start the Krusader docker app, as it will make moving the full contents of appdata...
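      One option is to work directly from the Unraid console against the Unassigned Devices mount points; a sketch, where the disk and share names are hypothetical:

        # Unassigned Devices mounts disks under /mnt/disks/
        ls /mnt/disks/
        # Midnight Commander: a two-panel terminal file manager (F6 = move)
        mc
        # For large unattended moves, rsync and delete the source after verifying:
        rsync -avP /mnt/disks/old_disk1/Media/ /mnt/disks/old_disk2/Media/

      (As post 11 above confirms, mc turned out to be exactly what was needed here.)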
  14. Is there a way to manage files on disks without starting the array? I have encrypted disks from a previous Unraid installation and I want to start over. I know that I can mount the disks using Unassigned Devices, but that requires the array to be started, with at least one disk assigned to it, which makes it harder to do file operations. I basically want to move the files off of one of the 3 encrypted disks onto the other 2, then start the vanilla array with that empty disk assigned, and then either mount the 2 remaining disks using Unassigned Devices or add them to slots in the array to continue re-arranging the share structure. Thanks. (This is a different Unraid server than the one in my signature.)
  15. I was actually going to move the system share to a pool in an attempt to let all drives in the array spin down, but I hadn't got there just yet... My thinking is to move the system, appdata and syslog shares to a pool with either a single SSD or mirrored SSDs, and back them up with the CA Backup / Restore Appdata plugin. Does that sound like a good idea? (At the moment the signature is meant to reflect that I have the shares on my array with caching set to Prefer for the cache pool, but that has not yielded the result I'm after; still learning.)
  16. Thanks a lot for your tip! Changing the spin-down time to 30 min, hitting Apply, then changing it back to 15 min and hitting Apply again seems to have worked like a charm. All disks except the parity disks and disk 1 (with the system share) have spun down.
  17. I have an 11-disk array, all white-labeled 12TB WD Reds, with 2-disk parity. I used to have only 1 parity drive, and the array disks were spinning down according to the 15 min policy set in Disk Settings. After adding a 2nd parity drive, all drives in the array are constantly spinning. I can't even spin them down manually. Is this expected? If not, how do I troubleshoot this? Thanks, Carl
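      For manual spin-down tests from the console, these are the usual tools; the disk number and device name below are hypothetical, so substitute the real ones from the Main tab:

        # Ask Unraid's md driver to spin down array disk 1
        /usr/local/sbin/mdcmd spindown 1
        # Or put a drive into standby directly, then query its power state
        hdparm -y /dev/sdb
        hdparm -C /dev/sdb    # should report "standby" once spun down

      If a drive immediately reports active again, something is still reading or writing to it.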
  18. I really don't think it's the case, but check whether any of the friends with whom you shared libraries changed their Plex alias.
  19. You should immediately log into your account at Plex.tv. From there, unshare any libraries, remove all friends and family members, and select to log out from all devices. Change your password, and disable and re-enable two-factor authentication. Then optionally re-create family members and re-share libraries with friends and family.
  20. I realize "Coming Soon" is inherently flexible but is there a timeline? Will the plugin for 6.9.x be updated or will it only be natively implemented in 6.10.x?
  21. Thanks for stepping in and correcting my false claims. Sorry for any confusion. I learned a lot from my mistake...
  22. When you moved folders using MC, did you move share to share, or disk folder to disk folder? You should NEVER move from a share to a disk folder or vice versa; it's asking for trouble.
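      A sketch of the distinction, assuming "share" here means the fused /mnt/user view and "disk folder" means a path on an individual disk (share and file names are hypothetical):

        # Safe: source and destination in the same namespace
        mv /mnt/user/Media/film.mkv /mnt/user/Archive/
        mv /mnt/disk1/Media/film.mkv /mnt/disk1/Archive/

        # Risky: /mnt/user/Media/film.mkv can be the very same file as
        # /mnt/disk1/Media/film.mkv, and copying a file onto itself can
        # truncate it -- never mix the two views in one operation
        cp /mnt/disk1/Media/film.mkv /mnt/user/Media/   # don't do this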
  23. Also, are you sure only Chromecast playback broke, or did all transcoded playback break? The latter would indicate a misconfiguration of the transcode folder, as if Plex were unable to write to that location.
  24. There seem to be a few things that are misconfigured, if my understanding is correct (or it could just be the terms you are using). First, a "pool" is always cache, i.e. it will belong to the array (although you can configure the share to "prefer" the cache). To have the Plex configuration and persistent storage installed on a dedicated SSD, I think you need to format it outside the array and mount the disk using the Unassigned Devices plugin. What is the reason for your configuration? There are numerous recommendations on how to configure Plex on Unraid.
  25. My thinking was to separate my actual issue from a general question about expected behavior. Still learning the forum culture... In the end I took a risk, stopped everything, and manually moved the files using MC. It was probably not something one should give as advice, but I could not get Mover to move the files. I reformatted the cache pools using encrypted btrfs. Everything is up and running, and the shares are re-configured/reverted to use the cache. The only negative side effect so far is that I had to reconfigure the Roon Server container. Thanks for your advice @ChatNoir, I really like the excellent support in the unRAID forums.