bombz

Everything posted by bombz

  1. Yeah, for sure. Apologies, I didn't grab them. For the time being, I am going to leave the system in the preset state:
Deployed: Pool 1 = (2X) btrfs - 2.5" SSD
Not deployed: Pool 2 = (2X) ZFS - NVMe
<Currently the (2X) NVMe disks are detected as UD devices since the restore, which makes sense.>
I was low on time yesterday and preferred to get things back into an operational state once I saw this concern occur. Moving forward I will attempt to recreate the operation: re-add the (2X) NVMe as a ZFS pool and toss a test docker container on them to see if the concern happens again. At the time of all the errors, when all the shares disappeared, I was wondering whether it was permissions? Available PCIe lanes? Possibly even the hardware itself (new expansion card with a new disk)? When I have some more 'down time' to work and test, I will be sure to grab diags and post them if the concern occurs again. Appreciate the follow-up.
  2. Hello, Appreciate the response, and good to know. I have been watching the logs for timestamps of spin-downs, forced spin-downs, and then spin-ups for SMART. I don't see an enable/disable option for the plugin at this moment, so I may uninstall it to investigate further. All disks are attached to an LSI HBA currently.
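For anyone watching for the same thing, this is roughly the pattern I have been grepping for (a sketch; Unraid logs to /var/log/syslog, and the exact log strings may differ between releases, so adjust to whatever your syslog actually shows):

    grep -Ei 'spin(ning)? (down|up)|read SMART' /var/log/syslog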
  3. Hello, Been testing the plugin; it works well. I am currently testing whether this plugin is waking disks for SMART reporting, as I have seen disks spin up more often than usual. I may uninstall it for a day or two to see if that resolves the spin-ups. The most recent update shows disk names, which is a nice added touch. Let me know your thoughts on the spin-ups. Thanks.
  4. Hello, Currently I have (1X) cache pool 1, which is used for all appdata/dockers. I added a new ZFS mirror cache pool, to be used for one specific docker. I was able to initialize ZFS cache pool 2 and attempted to install a test docker <qbitorrent>. Upon installing the docker, the docker log for this container stated 'could not create /appdata/qbitorrent/.cache/'. I attempted to restart the docker and it failed to initialize. I then went to my shares on Unraid and all the shares had disappeared. I shut down the server, restored my USB from backup, and rebooted to get back to the state before adding the ZFS cache pool. I am now back to the restored state before adding the ZFS cache pool.

Nov 20 18:28:40 UnRAID smbd[3965]: [2023/11/20 18:28:40.687420, 0] ../../source3/smbd/smb2_service.c:772(make_connection_snum)
Nov 20 18:28:40 UnRAID smbd[3965]: make_connection_snum: canonicalize_connect_path failed for service <folder share>, path /mnt/user/...
Nov 20 18:28:40 UnRAID smbd[3965]: [2023/11/20 18:28:40.688541, 0] ../../source3/smbd/smb2_service.c:772(make_connection_snum)
Nov 20 18:28:40 UnRAID smbd[3965]: make_connection_snum: canonicalize_connect_path failed for service <folder share>, path /mnt/user/...
Nov 20 18:28:40 UnRAID smbd[3965]: [2023/11/20 18:28:40.689571, 0] ../../source3/smbd/smb2_service.c:772(make_connection_snum)
Nov 20 18:28:40 UnRAID smbd[3965]: make_connection_snum: canonicalize_connect_path failed for service <folder share>, path /mnt/user/...

I am curious as to what may have gone wrong. Can you not use another cache pool for other docker containers? Does each cache pool need to have its own docker.img file on the pool? Any assistance would be great. Thanks
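When I get a chance to recreate this, my plan for a sanity check before deploying the test container again is something like the following from the console (a sketch; 'pool2' is a placeholder for whatever the new ZFS pool actually gets named):

    zpool status pool2    # pool ONLINE with both NVMe members healthy?
    zfs list -r pool2     # datasets created as expected?
    ls -ld /mnt/pool2     # mount point exists where Unraid expects it
    df -h /mnt/pool2      # confirm it is actually mounted, not writing to rootfs

If the mount point is missing or empty when the container writes its appdata, that might explain the 'could not create' error.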
  5. Hello, Reaching out to receive some possible feedback regarding (2X) NVMe disks for a mirrored ZFS cache pool. I have been seeing a lot of mixed reviews on disks based around performance, use-case, warranty issues <samsung>, etc. I have been attempting to find a brand of NVMe disk that would be recommended for use with Unraid and thought the community would be the best place to go for real-world feedback. Been having a hard time deciding to pull the trigger, as it has mostly been noted that these disks run hot depending on what they are used for and on the disk make and model; I have also seen posts pertaining to corrupted blocks, I/O errors, etc. I have been looking at the following NVMe disks so far: *keep in mind 500GB to 1TB would be more than sufficient*

SAMSUNG 970 EVO PLUS M.2 2280 1TB PCIe Gen 3.0 x4, NVMe 1.3 V-NAND Internal Solid State Drive (SSD) MZ-V7S1T0B/AM
Crucial P2 500GB 3D NAND NVMe PCIe M.2 SSD Up to 2400MB/s - CT500P2SSD8
Crucial P3 Plus 500GB PCIe Gen4 3D NAND NVMe M.2 SSD, up to 5000MB/s - CT500P3PSSD8
Western Digital 500GB WD Red SN700 NVMe Internal Solid State Drive SSD for NAS Devices - Gen3 PCIe, M.2 2280, Up to 3,430 MB/s - WDS500G1R0C
ADATA Legend 840 PCIe Gen4 x4 NVMe 1.4 M.2 Internal Gaming SSD Up to 5,000 MB/s (1TB)

> Any other NVMe disk recommendations that perhaps I have overlooked are welcome.
> NVMe heatsinks seem to be highly recommended.
> Does Gen3 vs. Gen4 make a large difference?

Does anyone have some useful feedback for this specific use-case? Looking forward to receiving some feedback. Thank you kindly.
  6. Hello, Much appreciated, and good idea. Thank you.
  7. Hello, This is an interesting idea. I currently use CA Backup for all my dockers, and you make a good point, as the backups would get quite large if they included the metadata within the Plex docker folder. As you stated, you are currently running the 'Plex\Library\<metadata>' appdata folder on a separate SSD or 'pool'. Out of curiosity, is it simple to point the Plex docker at another SSD to use that location for the metadata, perhaps a variable that needs to be added when deploying the Plex docker? Can you install the Plex docker to the 'default appdata location' while pointing the 'metadata' DIR at another SSD disk/pool? Or are you simply pointing the whole Plex docker instance 'appdata + metadata' at a different SSD disk/pool in a btrfs or zfs mirror? I am really curious about this setup, because it is an excellent point that, if/when you need to restore the docker, the metadata is on a completely different disk. Please feel free to include screens or PM me, as I don't want to hijack the thread, out of respect. Look forward to hearing back. Thank you.
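If it is the split approach, my guess at what the mapping would look like is an extra container path layered over the /config mount, something like this (hypothetical paths, assuming the linuxserver.io image layout; in the Unraid template this would be an additional 'Path' entry rather than a raw docker run):

    docker run -d --name plex \
      -v /mnt/cache/appdata/plex:/config \
      -v "/mnt/pool2/plex-metadata:/config/Library/Application Support/Plex Media Server/Metadata" \
      lscr.io/linuxserver/plex

That way the bulky Metadata folder would live on the second pool while the rest of appdata stays put for backups.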
  8. Hello, Was reviewing this post as I am soon migrating back to the Plex docker from another OS. Are there any other 'best practices', beyond this transcode recommendation, for the Plex docker deployment? I will keep skimming this thread and attempt to collect information. Look forward to hearing back. Thanks.
  9. Came across this thread, and this is exactly what I would eventually want to implement. Going to continue researching to understand how to deploy this scheduled task. Thank you.
  10. Hello, Could you provide some further insight regarding this? Would love to try to get this deployed and operational.
  11. +1 on this Would love to see this in CA again
  12. Hello, Appreciate all the posts from everyone here. Based on the errors attached:

Oct 21 03:08:01 UnRAID emhttpd: read SMART /dev/sdf
Oct 21 03:09:47 UnRAID kernel: pcieport 0000:00:01.0: AER: Multiple Corrected error received: 0000:00:01.0
Oct 21 03:09:47 UnRAID kernel: pcieport 0000:00:01.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
Oct 21 03:09:47 UnRAID kernel: pcieport 0000:00:01.0: device [8086:6f02] error status/mask=00000040/00002000
Oct 21 03:09:47 UnRAID kernel: pcieport 0000:00:01.0: [ 6] BadTLP
Oct 21 03:16:14 UnRAID kernel: pcieport 0000:00:01.0: AER: Multiple Corrected error received: 0000:00:01.0
Oct 21 03:16:14 UnRAID kernel: pcieport 0000:00:01.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
Oct 21 03:16:14 UnRAID kernel: pcieport 0000:00:01.0: device [8086:6f02] error status/mask=00000040/00002000
Oct 21 03:16:14 UnRAID kernel: pcieport 0000:00:01.0: [ 6] BadTLP
Oct 21 03:19:37 UnRAID kernel: pcieport 0000:00:01.0: AER: Multiple Corrected error received: 0000:00:01.0
Oct 21 03:19:37 UnRAID kernel: pcieport 0000:00:01.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
Oct 21 03:19:37 UnRAID kernel: pcieport 0000:00:01.0: device [8086:6f02] error status/mask=00000040/00002000
Oct 21 03:19:37 UnRAID kernel: pcieport 0000:00:01.0: [ 6] BadTLP
Oct 21 03:49:51 UnRAID kernel: pcieport 0000:00:01.0: AER: Multiple Corrected error received: 0000:00:01.0
Oct 21 03:49:51 UnRAID kernel: pcieport 0000:00:01.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
Oct 21 03:49:51 UnRAID kernel: pcieport 0000:00:01.0: device [8086:6f02] error status/mask=00000040/00002000
Oct 21 03:49:51 UnRAID kernel: pcieport 0000:00:01.0: [ 6] BadTLP
Oct 21 04:03:13 UnRAID kernel: pcieport 0000:00:01.0: AER: Multiple Corrected error received: 0000:00:01.0
Oct 21 04:03:13 UnRAID kernel: pcieport 0000:00:01.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
Oct 21 04:03:13 UnRAID kernel: pcieport 0000:00:01.0: device [8086:6f02] error status/mask=00000040/00002000
Oct 21 04:03:13 UnRAID kernel: pcieport 0000:00:01.0: [ 6] BadTLP
Oct 21 04:07:04 UnRAID kernel: pcieport 0000:00:01.0: AER: Multiple Corrected error received: 0000:00:01.0
Oct 21 04:07:04 UnRAID kernel: pcieport 0000:00:01.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
Oct 21 04:07:04 UnRAID kernel: pcieport 0000:00:01.0: device [8086:6f02] error status/mask=00000040/00002000
Oct 21 04:07:04 UnRAID kernel: pcieport 0000:00:01.0: [ 6] BadTLP
Oct 21 04:10:54 UnRAID kernel: pcieport 0000:00:01.0: AER: Multiple Corrected error received: 0000:00:01.0
Oct 21 04:10:54 UnRAID kernel: pcieport 0000:00:01.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
Oct 21 04:10:54 UnRAID kernel: pcieport 0000:00:01.0: device [8086:6f02] error status/mask=00000040/00002000
Oct 21 04:10:54 UnRAID kernel: pcieport 0000:00:01.0: [ 6] BadTLP
Oct 21 04:19:07 UnRAID emhttpd: read SMART /dev/sde
Oct 21 04:19:07 UnRAID emhttpd: read SMART /dev/sdo
Oct 21 05:04:47 UnRAID kernel: pcieport 0000:00:01.0: AER: Multiple Corrected error received: 0000:00:01.0
Oct 21 05:04:47 UnRAID kernel: pcieport 0000:00:01.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
Oct 21 05:04:47 UnRAID kernel: pcieport 0000:00:01.0: device [8086:6f02] error status/mask=00000040/00002000

I added the following:
Select > Main
Select > Flash
Scroll down > Syslinux Configuration
Unraid OS
kernel /bzimage
append pcie_aspm=off initrd=/bzroot pci=noaer

Also referenced this post regarding pci=noaer. Since the reboot, there have been no further errors. Appreciate the assist here everyone! Thank you!
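For anyone applying the same fix, the resulting 'Unraid OS' stanza in the Syslinux Configuration (stored on the flash at /boot/syslinux/syslinux.cfg) should end up looking roughly like this; a sketch, and the surrounding labels/menu lines on your flash may differ:

    label Unraid OS
      menu default
      kernel /bzimage
      append pcie_aspm=off initrd=/bzroot pci=noaer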
  13. Thanks buddy. That was an oversight for sure. I'll mark the topic as resolved. Cheers, appreciate the assist here!
  14. Hello, I was adjusting my shares today and noticed that my disk 10 is no longer seen in the included or excluded disks. I can access the disk directly via explorer or SSH, and the disk is online in the GUI; I am not sure why Unraid 'shares' no longer sees it. I have shut down and rebooted the server with no change to this concern. Any ideas? Thanks.
  15. Select > Docker section in the top banner in Unraid
Select > the docker you are having the concern with
Select > Edit (from the docker menu)
In the 'Update Container' section:
Select > Advanced View (top right of the page)
Scroll down to > Path: /data:
Select > Edit
In the 'Edit Configuration' window:
Select > Access Mode
Select > Read/Write - Slave
Select > Save
Select > Apply
Once processing has finished, go to Settings (in the top banner)
Select > Fix Common Problems
Select > Rescan
The previous error should resolve itself. Hope that helps.
  16. Hello, THANK YOU for this post! umount /var/lib/docker Worked perfectly to unmount disk shares on 6.12.2 Was rebooting to update to 6.12.4 and this unmount looping issue occurred. Many thanks!
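One general Linux note in case it helps the next person: if the unmount ever refuses with 'target is busy', something like this should show what is holding it (not Unraid-specific, just standard tooling):

    umount /var/lib/docker || fuser -mv /var/lib/docker   # list processes keeping the mount busy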
  17. Hello, Would that be in a .cfg file somewhere on the flash drive? I am trying to understand how he got it into the state it is in. Could he have done something in docker and the revproxy that made this act up in such a way? Look forward to hearing back. Thank you for your assistance with this! EDIT: Found it in Settings->Management Access. Thanks
  18. Hello, Attempted with https://IP:2443 and was able to successfully connect (invalid SSL cert). Now trying to understand how to set things back to normal for this instance, so that the port is no longer required. Any suggestions? Thanks.
  19. Appreciate this post. I picked up the following card recently: LSI SAS 9300-16I 12GB/S HBA BUS ADAPTER CARD IT Mode 4*SFF-8643 SATA Cable. I was curious about the best way to check over the card before putting it into production in Unraid, as well as to confirm IT mode is actually flashed and working correctly. Is there a step-by-step guide to reference before using the card? Thank you.
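In case anyone lands here with the same card, my understanding is the quick console check uses Broadcom's sas3flash utility for the SAS3 (9300-series) chips, assuming the tool is available on your system:

    lspci | grep -i sas    # is the card enumerated on the PCIe bus?
    sas3flash -listall     # list detected SAS3 controllers
    sas3flash -list        # firmware details; the firmware/product ID should show IT mode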
  20. Hello, Stumbled across your post regarding this setup; were you able to figure it out successfully? My setup differs a bit, as I have (2X) gateways with (2X) different WAN connections.

Plan: Looking to understand how to set up the Unraid server so the following takes place:

(1X) onboard NIC
This NIC has a gateway on the network as well as a WAN1 connection. Would like to use this NIC for internal communications only. I DO NOT want to use this WAN1 connection as a primary connection; would rather keep it as a backup, fail-over WAN1 connection.

(1X) PCI NIC
This NIC has a gateway on the network as well as a WAN2 connection. Would like to use this PCI NIC for external communications only.

Is there a way to configure Unraid to always use this PCI NIC for WAN2 only? I know that within the Windows OS you can change the NIC priority ('interface metric') order of the NICs. I am not sure if I can set this up in the same manner here and was seeking some clarity. Thank you.
  21. Hello, Quick question regarding adding an additional NIC to my current Unraid server; I will attempt to give as much detail as possible.

Current setup:
(1X) onboard NIC connected to the network (WAN1)

New setup:
(1X) onboard NIC (WAN1) <existing NIC>
(1X) PCI NIC (WAN2) <new NIC>

Plan: Looking to understand how to set up the Unraid server so the following takes place:

(1X) onboard NIC
This NIC has a gateway on the network as well as a WAN1 connection. Would like to use this NIC for internal communications only. I DO NOT want to use this WAN1 connection as a primary connection; would rather keep it as a backup, fail-over WAN1 connection.

(1X) PCI NIC
This NIC has a gateway on the network as well as a WAN2 connection. Would like to use this PCI NIC for external communications only.

Is there a way to tell Unraid to always use this PCI NIC for WAN2 only? I know that within the Windows OS you can change the NIC priority ('interface metric') order of the NICs. Is there a method to set this within the Unraid OS, as in the sketch below? My hope is I worded this correctly for a broad understanding. I am also open to suggestions from the community to assist with this plan. Appreciate your time. Thank you!
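To illustrate what I mean (a hedged sketch with made-up interface names and gateway addresses; eth0 = onboard/WAN1, eth1 = PCI NIC/WAN2):

    # Prefer WAN2 (eth1) via a lower metric; keep WAN1 (eth0) as the fallback.
    ip route add default via 192.168.2.1 dev eth1 metric 100
    ip route add default via 192.168.1.1 dev eth0 metric 200
    ip route show   # verify both default routes exist, eth1 preferred

Essentially I am asking whether Unraid's network settings can express that same preference.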
  22. Hello, Appreciate the follow-up. I did not try those ports via the webGUI as IP:PORT. I will do my best to attempt these suggestions; since I am remote from the system assisting a friend, I will have them set up my remote connection back to them and see if these suggestions allow me to access the webGUI. If they do, would I need to suggest that they always use IP:PORT to access the webGUI moving forward, or where do you think the problem resides, after investigation? Thank you.
  23. Hello, Working on a friend's system remotely; they said they can no longer connect to the GUI, which I confirmed. I was able to connect via SSH with the IP address and pull diagnostics, ifconfig, and /boot. From SSH I also performed:

nano /boot/config/docker.cfg

and changed DOCKER_ENABLED="yes" to DOCKER_ENABLED="no" to check whether a docker port was the cause, but that didn't seem to resolve the concern. I have still been unsuccessful and am trying to figure out why the webGUI is not coming up. I was curious if someone could review the attached and point me in the correct direction, if the diags show anything? Thank you in advance! tower-diagnostics-20230922-1922.zip
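For reference, the same toggle as a one-liner over SSH (a sketch; I took a backup of the file first since it lives on the flash at /boot):

    cp /boot/config/docker.cfg /boot/config/docker.cfg.bak
    sed -i 's/^DOCKER_ENABLED="yes"/DOCKER_ENABLED="no"/' /boot/config/docker.cfg
    grep DOCKER_ENABLED /boot/config/docker.cfg   # confirm the change took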
  24. Hello, Ah ok -- the old ones in 6.12.* do not say deprecated on my instance. I will install 'Appdata Backup' from KluthR's repository:

plugin: installing: appdata.backup.plg
Executing hook script: pre_plugin_checks
plugin: downloading: appdata.backup.plg ... done
Executing hook script: pre_plugin_checks
Checking some pre-install things...
Plugin files were not present. Fresh install
plugin: downloading: appdata.backup-2023.06.23.tgz ... done
Extracting plugin files...
Checking cron. Checking cron succeeded!
----------------------------------------------------
appdata.backup has been installed.
(previously known as ca.backup2)
2022-2023, Robin Kluth
Version: 2023.06.23
----------------------------------------------------
plugin: appdata.backup.plg installed
Executing hook script: post_plugin_checks

Thank you for clarifying the process. All looks well. I will monitor the backups moving forward and see if there are any concerns. Thanks for all your hard work; the new GUI looks great!
  25. Right. So what am I missing? I attempt to install 2.5 (the update), and it states it's already installed. Regardless of what I select, it states it's already installed. What am I missing to perform this installation and migration?