srfnmnk

Members
  • Posts

    118
  • Joined

  • Last visited




  1. Just closing this out. I have confirmed that the enclosure is unable to pass through individual device IDs: the ORICO enclosures are not compatible with UD because they present a single device ID to UD. I replaced the enclosure with a Sabrent USB 3.2 4-bay 3.5" enclosure and it works great. It's keeping the drives down at 85°F, whereas with the incompatible ORICO enclosure the drive temperatures were consistently around 110°F. So I'd highly recommend this little Sabrent enclosure if you need one for UD.
  2. Gotcha, thanks for that. I took a look and noticed a subtle difference between the two disks showing up. Perhaps some logic can be added to UD to enable more devices? This is what shows up for the USB enclosure:

     root@pumbaa:/dev/disk/by-id# ls | grep USB
     usb-External_USB3.0_0000007788FC-0:0@
     usb-External_USB3.0_0000007788FC-0:0-part1@
     usb-External_USB3.0_0000007788FC-0:1@
     usb-External_USB3.0_0000007788FC-0:1-part1@

     I'm not sure whether there's any other way to identify the disks besides by-id, but I thought I'd share what I found in case there's an opportunity here. For now, I suppose I will return this enclosure and try another one. Thanks again for the assistance.
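To illustrate the problem (a rough sketch, not anything from UD itself): the two by-id names above share one USB serial (0000007788FC) and differ only in the trailing LUN suffix (-0:0 vs. -0:1), so once the suffix is stripped there is only a single identifier left for UD to key on:

```shell
# Sketch: strip the LUN suffix from the two by-id device names above.
# Both bays collapse to the same serial, which is why UD sees one device.
printf '%s\n' \
  'usb-External_USB3.0_0000007788FC-0:0' \
  'usb-External_USB3.0_0000007788FC-0:1' |
  sed 's/-[0-9]*:[0-9]*$//' | sort -u
# -> usb-External_USB3.0_0000007788FC   (a single unique line)
```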
  3. I see that the device is appearing multiple times, but why is UD not able to grab the disk metadata? When I click on the dev, you can see from my screenshots that the disk metadata is available. Not trying to be difficult, just curious: if Unraid is able to see the serial numbers, why can't UD? Is there a way to fix this? Thanks
  4. Thanks, I recorded a short two-minute video so you could see what's happening. I click on the dev to see the disk details, and I am seeing the correct serial numbers. See the images attached. I'm just trying to be 100% sure that the enclosure won't work before I go and exchange it, because I had another version of this exact same brand and it's been working for years. One last note: I did have one of the disks shared in the previous enclosure, and I do have a docker that references the cctv mount point... I'm wondering if something is cached somewhere that is messing up UD.
  5. How can I confirm? I can see the serial number and model number when I click on the disk link. lsblk also shows them as different devs with different partitions, and even when both are mounted, one partition is mounted at /dev/sdb1 and the other at /dev/sdx1.
  6. Hi there, I could not find the answer; sorry if this is a duplicate. I am using an ORICO 5-bay external disk enclosure, and all disks in this enclosure are meant to be UD devices (not pooled devices). This is a USB 3 device. It works fine as long as only a single bay is used with a disk mounted. When I add a second drive, I cannot have a separate mount point, and when I change the partition name of any drive, all partitions on all drives within the enclosure receive the new partition name. It appeared that this was due to a duplicate UUID, so I tried to change it via the UD settings, but I received:

     Jul 13 06:10:37 pumbaa kernel: XFS (sdai1): Filesystem has duplicate UUID d97712e4-9e51-46b8-a133-f6ea96642d18 - can't mount
     Jul 13 06:10:37 pumbaa unassigned.devices: Mount of '/dev/sdai1' failed: 'mount: /mnt/disks/cctv: wrong fs type, bad option, bad superblock on /dev/sdai1, missing codepage or helper program, or other error. '
     Jul 13 06:10:37 pumbaa unassigned.devices: Partition 'cctv' cannot be mounted.
     Jul 13 06:09:33 pumbaa unassigned.devices: Error: shell_exec(/usr/sbin/xfs_admin -U generate /dev/sdb1) took longer than 10s!
     Jul 13 06:09:33 pumbaa unassigned.devices: Changing disk '/dev/sdb' UUID. Result: command timed out

     I uninstalled and reinstalled UD. With any single drive everything works OK, but with a second drive everything gets goofy. Thoughts?

     Details: Unraid 6.9.1, UD version 2021.07.08
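For anyone hitting the same thing, here's a quick way to confirm a duplicate-UUID situation. This is a sketch: the two printf lines stand in for real `blkid` output on the host (they reuse the UUID from the log above), so on a live system you'd pipe `blkid` instead:

```shell
# Sketch: detect duplicate filesystem UUIDs. On the real host, pipe `blkid`
# in place of the printf; these sample lines reuse the UUID from the log above.
printf '%s\n' \
  '/dev/sdb1: UUID="d97712e4-9e51-46b8-a133-f6ea96642d18" TYPE="xfs"' \
  '/dev/sdai1: UUID="d97712e4-9e51-46b8-a133-f6ea96642d18" TYPE="xfs"' |
  grep -o 'UUID="[^"]*"' | sort | uniq -d
# Any UUID printed here is shared by two or more partitions. The usual fix is
# `xfs_admin -U generate` on the unmounted partition, which is exactly what UD
# attempted (and what timed out) in the log above.
```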
  7. No, I'm using a Sabrent NVMe drive. I do have Samsung Evos in unassigned devices, but not as cache. I am thinking my issue is that my cache drive was encrypted and the upgrade didn't account for that... Glad to hear you're back up and running.
  8. I wound up having to do the exact same thing as @Aurial, with one MAJOR difference: I can mount the disk using Unassigned Devices with no issue. I am in the process of moving all of my data to another disk so I can format and restore. I think the issue is that the drive was encrypted -- I didn't see any release/upgrade notes specific to handling encrypted drives. This seems like an oversight? Still working on restoring / setting up a secondary pool with the same encryption key to see if that works; will check back in.

     UPDATE - Posting the log output of the array startup -- as you can see, it's claiming it cannot interact with the device and/or there's corruption, but Unassigned Devices can interact with it just fine.

     UPDATE 2 - I reformatted my cache drive (btrfs, encrypted) and restored everything -- the pool still cannot mount the encrypted cache drive.

     UPDATE 3 - I was finally able to get my NVMe (encrypted btrfs) working as cache for my pool, but I did have to: mount it using Unassigned Devices; back up everything (I had a remote backup, but this was way faster); add it back to the pool as the cache drive; start the pool (still "Unmountable Filesystem"); click Format at the bottom to format the cache drive (be sure only the cache drive is there and no data disks); and then load all my data back to /mnt/cache. This worked. Back up and running, and all my dockers are back along with everything else on 6.9.0.

     unraid_cache_drive_failure.txt
  9. That's awesome! Please let me know how it goes! I was planning to try it yesterday too but ran out of time; it's on my todo list as well. Well done!
  10. Hi @skois, thank you so much for your post on Face Recognition setup. Were you ever able to get it to work on external storage? I have mounted external storages as "Local". It looks like model 4 is the best right now; have you played around with the various models? Did you ever pass a GPU through to the container to let the GPU process the files? Thanks again!

     Edit 1: I found this -- I can't tell if the feature was merged or not, but the PR was closed. https://github.com/matiasdelellis/facerecognition/issues/26

     Edit 2: SOLUTION for EXTERNAL STORAGE: https://github.com/matiasdelellis/facerecognition/wiki/Settings#hidden-settings

     occ config:app:set facerecognition handle_external_files --value true
     occ user:setting USER_ID facerecognition full_image_scan_done false

     Edit 3: DOCKER IMAGE POSTED WITH PDLIB INSTALLED. I realized that the image got re-pulled each time the container was restarted, and I didn't want to wait for pdlib to install each time, so I created my own Docker Hub image with pdlib pre-installed. I plan to use Nextcloud and to keep this image up to date, but no promises. https://hub.docker.com/r/srfnmnk/nextcloud_face All you need to do to use it is replace the repository/image location in your docker config: edit the Nextcloud container, hit advanced view, and change the following two fields from the linuxserver.io path to mine. The Dockerfile is exactly the same with the addition of the compiled pdlib -- use at your own risk, obviously.
  11. Right, that's what I wanted to do, but the organization of the files and data is nested and challenging to mount into the docker properly. As you said, it seems to be working; if that changes, I will let you know. I have periodic backups of the databases, so I could recover in the event of an issue.
  12. I am curious how the mover doesn't corrupt/break the WAL or other database files. Maybe the mover doesn't move locked files? I would prefer to keep the db files on the cache in /appconfig, but figuring out how to get the nested mounts to work got a bit iffy.
  13. Yeah, I think both "Yes" and "No" have merit depending on the circumstances. But "Prefer" and "Only" are no-nos, I would think.
  14. Hi folks, I searched for high IO but couldn't find any references. Below is an image of my cache drive IO with the Nextcloud docker image on. I added several "external storage" locations as "local" in Nextcloud, but other than that, there's really nothing ON Nextcloud. I do believe it's indexing photo metadata or something, but I'm trying to find out why my cache drive is at 300-800 MB/s for nearly 24 hours after installing Nextcloud. When I stop the Nextcloud container, the IO goes back to normal. I also have my MySQL database on this cache drive, which is really only being used by Nextcloud ATM. Any ideas on why such high IO? Another thing to note: it's been writing at these speeds for nearly 24 hours, mover has not run, and yet there's no major disk usage impact -- the used/free ratio really hasn't changed. Any ideas here?
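In case it helps anyone debugging the same thing, here's a rough sketch of how I'd track down which process is doing the writing, using the per-process IO counters the kernel keeps under /proc (run as root on the Unraid host so containerized processes are readable too):

```shell
# Sketch: rank processes by cumulative bytes written, using the
# write_bytes counter from /proc/<pid>/io. Run as root so other
# users' processes are readable; permission errors are suppressed.
for p in /proc/[0-9]*; do
  wb=$(awk '/^write_bytes:/ {print $2}' "$p/io" 2>/dev/null)
  [ -n "$wb" ] && printf '%14s  %s\n' "$wb" "$(cat "$p/comm" 2>/dev/null)"
done | sort -rn | head -5
```

The counters are cumulative since process start, so a container that has been hammering the cache drive for hours will float straight to the top.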
  15. I think you shouldn't use cache "Prefer" -- you should use cache "Yes". Then mover will move the data over, and the df -h command will see the free space on your underlying drives, not the space on the cache. It's working for me with cache "Yes". When I log into the docker image, I also see the proper space for the mount.