Everything posted by srfnmnk

  1. Happened to me today too. Restarted the array / Unraid; Docker came up and so did a few VMs, but some failed. Trying to start them manually failed with the error "VNC: no available port". Looked in syslog and found "Failed to add multicast for WSDD: Address already in use". The fix: disabled the VM service, disabled Docker, re-enabled the VM service (worked), then re-enabled Docker (worked).
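     A rough sketch of that recovery sequence from the shell, assuming Unraid's Slackware-style rc scripts (verify the paths on your release before relying on them):

        grep -i wsdd /var/log/syslog    # confirm the multicast error is present
        /etc/rc.d/rc.libvirt stop       # disable the VM service
        /etc/rc.d/rc.docker stop        # disable Docker
        /etc/rc.d/rc.libvirt start      # re-enable VMs first
        /etc/rc.d/rc.docker start       # then bring Docker back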
  2. Hi, I'm having network stability issues with some of my Docker containers, and after reviewing the logs, this is what I found:
     arp (unraid ip) moved from br0 mac to shim-br0 mac
     arp (unraid ip) moved from shim-br0 mac to br0 mac
     This happens about every 10-20 minutes. Can someone please help me figure out what's going on here and how to fix it? Much appreciated. pumbaa-diagnostics-20220219-1014.zip
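     A minimal way to watch the flap live, assuming tcpdump is available on the box; 192.168.1.10 below is a placeholder for the Unraid server's IP:

        tcpdump -i br0 -n arp and host 192.168.1.10   # watch the ARP announcements
        ip link show br0 | grep ether                 # MAC on the bridge
        ip link show shim-br0 | grep ether            # MAC on the shim interface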
  3. Just closing this out. I have confirmed that the enclosure is unable to pass through the device IDs. The ORICO enclosures are not compatible with UD; they present a single device ID to UD. I replaced the enclosure with a Sabrent USB 3.2 4-Bay 3.5" and it works great. It's keeping the drives down at 85°F, whereas with the other enclosure (the one incompatible with UD) the drive temperatures were around 110°F consistently. So I'd highly recommend this little Sabrent enclosure if you need one for UD.
  4. Gotcha, thanks for that. I took a look and did notice a subtle difference between the two disks showing up. Perhaps some logic can be added to UD to enable more devices? This is what shows up for the USB enclosure:
     root@pumbaa:/dev/disk/by-id# ls | grep USB
     usb-External_USB3.0_0000007788FC-0:0@
     usb-External_USB3.0_0000007788FC-0:0-part1@
     usb-External_USB3.0_0000007788FC-0:1@
     usb-External_USB3.0_0000007788FC-0:1-part1@
     I'm not sure if there's any way to identify the disks other than by-id, but I thought I'd share what I found in case there's an opportunity here. For now, I suppose I will return this enclosure and try another one. Thanks again for the assistance.
  5. I see that the device is appearing multiple times, but why is UD not capable of grabbing the disk metadata? When I click on the dev, you can see from my screenshots that the disk metadata is available. Not trying to be difficult, just curious: if Unraid is able to see the serial numbers, why can't UD? Is there a way to fix this? Thanks
  6. Thanks, I recorded a little 2m video so you could see what's happening. I click on the dev to see the disk details and I am seeing the correct serial numbers; see the images attached. Just trying to be 100% sure that the enclosure won't work before I go and exchange it, because I had another version of this exact same brand and it's been working for years. One last note: I did have one of the disks shared in the previous enclosure, and I do have a docker container that references the cctv mount point... wondering if there's something cached somewhere that is messing up UD.
  7. How can I confirm? I can see the serial and model numbers when I click on the disk link. lsblk also shows them as different devices with different partitions, and even when both are mounted, one partition is at /dev/sdb1 and the other is at /dev/sdx1.
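     For comparing what the kernel itself reports per device, lsblk can print the model, serial, and UUID columns directly (standard util-linux options, though the available columns can vary a bit by version):

        lsblk -o NAME,MODEL,SERIAL,UUID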
  8. Hi there, I could not find the answer; sorry if this is a duplicate. I am using an Orico 5-bay external disk enclosure, and all disks in this extender are meant to be UD devices (not pooled devices). This is a USB3 device. It works fine as long as only a single bay is used with a disk mounted. When I add a second drive, I cannot have a separate mount point, and when I change the partition name of any drive, all partitions on all drives within the enclosure receive the new partition name. It appeared that this was due to a duplicate UUID, so I tried to change it via the UD settings, but I received:
     Jul 13 06:10:37 pumbaa kernel: XFS (sdai1): Filesystem has duplicate UUID d97712e4-9e51-46b8-a133-f6ea96642d18 - can't mount
     Jul 13 06:10:37 pumbaa unassigned.devices: Mount of '/dev/sdai1' failed: 'mount: /mnt/disks/cctv: wrong fs type, bad option, bad superblock on /dev/sdai1, missing codepage or helper program, or other error. '
     Jul 13 06:10:37 pumbaa unassigned.devices: Partition 'cctv' cannot be mounted.
     Jul 13 06:09:33 pumbaa unassigned.devices: Error: shell_exec(/usr/sbin/xfs_admin -U generate /dev/sdb1) took longer than 10s!
     Jul 13 06:09:33 pumbaa unassigned.devices: Changing disk '/dev/sdb' UUID. Result: command timed out
     I uninstalled and reinstalled UD. With any single drive everything works OK, but with a second drive everything gets goofy. Thoughts?
     Details: Unraid 6.9.1, UD version 2021.07.08
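     The UUID change that timed out in UD can be attempted by hand; xfs_admin -U generate is the same tool UD calls (it appears in the log above). A minimal sketch, assuming /dev/sdai1 from the log and that the partition is unmounted with a clean XFS log (a successful mount/unmount first may be required):

        umount /dev/sdai1 2>/dev/null    # the filesystem must not be mounted
        xfs_admin -U generate /dev/sdai1 # write a fresh random UUID
        blkid /dev/sdai1                 # confirm the new UUID took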
  9. No, I'm using a Sabrent NVMe drive. I do have Samsung EVOs in unassigned devices, but not as cache. I am thinking my issue is the fact that my cache drive was encrypted and the upgrade didn't account for that... Glad to hear you're back up and running.
  10. I wound up having to do the same exact thing as @Aurial, with one MAJOR difference: I can mount the disk using Unassigned Devices with no issue. I am in the process of moving all of my data to another disk so I can format and restore. I think the issue is that the drive was encrypted; I didn't see any release / upgrade notes specific to handling encrypted drives. This seems like an oversight? Still working on restoring / setting up a secondary pool with the same encryption key to see if that works; will check back in.
     UPDATE - Posting the log output of the array startup. As you can see, it's claiming it cannot interact with the device and/or there's corruption, but Unassigned Devices can interact with it just fine.
     UPDATE 2 - I reformatted my cache drive (btrfs encrypted) and restored everything; the pool still cannot mount the encrypted cache drive.
     UPDATE 3 - I was finally able to get my NVMe (encrypted btrfs) working as cache for my pool, but I had to: mount it using Unassigned Devices, back up everything (I had a remote backup, but this was way faster), add it back to the pool as the cache drive, start the pool (still "Unmountable Filesystem"), click Format at the bottom to format the cache drive (be sure it's only the cache drive there and no data disks), and then load all my data back to /mnt/cache. This worked. Back up and running, and all my dockers are back along with everything else, on 6.9.0. unraid_cache_drive_failure.txt
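     A sketch of the backup/restore step, assuming Unassigned Devices mounted the old cache at /mnt/disks/cache_old and that an array disk is used as the staging area (both paths are placeholders for your setup):

        rsync -avh --progress /mnt/disks/cache_old/ /mnt/disk1/cache_backup/
        # ...re-add the drive to the pool, format it, then restore:
        rsync -avh --progress /mnt/disk1/cache_backup/ /mnt/cache/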
  11. That's awesome! Please let me know how it goes. I was planning to try it yesterday as well but ran out of time; it's still on my todo list. Well done!
  12. Hi @skois, thank you so much for your post on the Face Recognition setup. Were you ever able to get it to work on external storage? I have mounted external storages as "Local". It looks like model 4 is the best right now; have you played around with the various models? Did you ever pass a GPU through to the container to let the GPU process the files? Thanks again!
     Edit 1: I found this -- I can't tell if the feature was merged or not, but the PR was closed. https://github.com/matiasdelellis/facerecognition/issues/26
     Edit 2: SOLUTION to EXTERNAL STORAGE -- https://github.com/matiasdelellis/facerecognition/wiki/Settings#hidden-settings
     occ config:app:set facerecognition handle_external_files --value true
     occ user:setting USER_ID facerecognition full_image_scan_done false
     Edit 3: DOCKER IMAGE POSTED WITH PDLIB INSTALLED -- I realized that the image got re-pulled each time the container was restarted, and I didn't want to wait for pdlib to install each time, so I created my own Docker Hub image with pdlib pre-installed. I plan to keep using Nextcloud and to keep this image up to date, but no promises. https://hub.docker.com/r/srfnmnk/nextcloud_face
     All you need to do to use it is replace the repository / image location in your docker config: edit the Nextcloud container, hit advanced view, and change the two repository fields from the linuxserver.io path to mine. The Dockerfile is exactly the same with the addition of the compiled pdlib -- use at your own risk, obviously.
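     On Unraid, the two occ commands above are easiest to run inside the container. A sketch, assuming the container is named "nextcloud" and the image puts an occ wrapper on its PATH (recent linuxserver.io images do; otherwise invoke php occ from the web root yourself):

        docker exec -it nextcloud occ config:app:set facerecognition handle_external_files --value true
        docker exec -it nextcloud occ user:setting USER_ID facerecognition full_image_scan_done false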
  13. Right, that's what I wanted to do, but the organization of the files and data is nested and challenging to mount into the docker properly. As you said, it seems to be working; if that changes, I will let you know. I have periodic backups of the databases, so I could recover in the event of an issue.
  14. I am curious how the mover doesn't corrupt / break the WAL or other database files. Maybe the mover doesn't move locked files? I would prefer to keep the db files on the cache in /appconfig, but figuring out how to get the nested mounts to work got a bit iffy.
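     One way to test that theory: if mover skips files that are held open (the stock mover script checks for in-use files before moving -- worth verifying on your version), then listing open handles under the db directory shows what it would leave alone. The path here is an example, not my real one:

        lsof +D /mnt/cache/appdata/mysql 2>/dev/null   # files a running db holds open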
  15. Yeah, I think both "Yes" and "No" have merit depending on the circumstances. But "Prefer" and "Only" are no-nos, I would think.
  16. Hi folks, I searched for high IO but couldn't find any references. Below is an image of my cache drive IO with the Nextcloud docker image on. I added several "external storage" locations as "Local" in Nextcloud, but other than that there's really nothing ON Nextcloud. I do believe it's indexing photo metadata or something, but I'm trying to find out why my cache drive is at 300-800 MB/s for nearly 24 hours after I installed Nextcloud. When I stop the Nextcloud container, the IO goes back to normal. I do also have my MySQL database on this cache drive, which is really only being used by Nextcloud ATM. Any ideas on why such high IO?
     Another thing to note: it's been writing at these speeds for nearly 24 hours, mover has not run, and yet there's no major disk usage impact. The used/free ratio really hasn't changed... any ideas here?
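     To pin down which process is actually doing the writes, iotop would be the first thing to try (assuming it's installed, e.g. via the NerdTools plugin -- it isn't part of stock Unraid as far as I know):

        iotop -oPa    # -o: only active, -P: per process, -a: accumulated totals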
  17. I think you shouldn't use cache "Prefer" -- you should use cache "Yes"; then mover will move the data over, and the df -h command will see the free space on your underlying drives, not the space on the cache. It's working for me with cache "Yes". When I log into the docker image, I also see the proper space for the mount.
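     A quick way to confirm the behavior after mover runs; the share path is a placeholder for whatever your container mounts:

        df -h /mnt/user/nextcloud    # should report the array's free space, not the cache's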
  18. Ohhh yeah -- totally forgot about this plugin, thanks!
  19. Agreed, thank you. Will move the deeper Storj conversation over there.
  20. Hi @Squid, is there any way as of yet to specify only certain docker containers for auto-update? Thanks
  21. Thanks! I can confirm: when I started my second node, the ingress on my 1st node slowed; I do not know if it slowed my vetting time. Do you know if I can check the number of passed audits? I just see 100%, not the actual count. Why do you think five 8TB nodes would be more efficient than two 20TB nodes?
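     For the audit counts, the storagenode's local dashboard API can be queried directly. The port and endpoint below are what recent node versions expose, but treat them as an assumption to verify against your version (jq is only for pretty-printing):

        curl -s http://localhost:14002/api/sno/satellites | jq '.'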
  22. Awesome info, thanks for sharing. Do you have a link to that Storj forum? I think you're still suggesting that I run 2 nodes, just that I wait until the first one is fully vetted and operational before spinning up the second one, but still not exceed 40TB since there's likely no point beyond that, right?
  23. Thanks for the pointers -- it was a struggle, primarily because I didn't want to use the Unraid server's root account to generate and authorize identities. I did know that multiple storage nodes could limit ingress/egress, but I DID NOT realize it would double vetting time; thanks for that info. My thought was that, assuming there is sufficient heterogeneous data, Storj would still send me enough to populate two nodes. I'll shut down the second and do some testing once the first one fills up (which may never happen; 24TB could take a while, lol). Why is 24TB the max? Is that still the recommendation, or is a single, larger node OK?
  24. Hey folks. After digging into this, the combination of authorizing your identity and getting the keys and signed certs into the proper location is quite challenging. I created a video to help clarify some of the confusing parts. I'm not sure if the video helps or makes it worse, but if you're stuck, perhaps you can see the way I got it to work and get over the hump. YT VIDEO
     P.S. I realize you don't have to authorize from within the container, but there are challenges authorizing outside the container as well (i.e. Unraid typically only has a root user, and you probably don't want the identity binary creating hidden files in your /root/.local). Signing it on another Ubuntu machine is probably the easiest, but if you don't have that, this is maybe the easiest way to go.
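     For reference, the core of what the video walks through uses Storj's identity CLI; create and authorize are its documented subcommands, and the email:token pair comes from your signup email (placeholder below):

        identity create storagenode
        identity authorize storagenode user@example.com:tokencharacters
        # then move the resulting identity directory to the path the container mounts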
  25. First-time startup needs `-e SETUP=true` in the extra parameters. The container will not stay started with it set either; it just provisions the initial requirements. Once that's done, go back to the extra parameters, remove it again, and start the container back up.
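     The CLI equivalent of that first run, assuming this is the Storj storagenode container (the image name and mount paths are placeholders for whatever your template uses):

        docker run --rm -e SETUP=true \
            -v /mnt/user/appdata/storagenode/identity:/app/identity \
            -v /mnt/user/storj:/app/config \
            storjlabs/storagenode:latest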