Flubster

Members
  • Content Count: 20
  • Joined
  • Last visited
  • Days Won: 1

Flubster last won the day on May 9 2020

Flubster had the most liked content!

Community Reputation: 12 Good

About Flubster

  • Rank: Newbie


  1. Not sure if the UDMP is different from my USG, but I didn't need additional WAN_IN rules other than the default port forward rules (the 3000-range rules). Try disabling them for a start and see if that gets you anywhere. I did have some very bizarre issues with VLANs and the USG, so it can be very frustrating; it forced me to set up a syslog server just to get the detailed firewall logs and fix the issues (see the syslog sketch below the list). Flub
  2. So, over the last few months I've been gradually retiring some old 2TB SAS drives (with escalating "Elements in grown defect list" errors) and replacing them with 4TB SATA drives. As I run dual parity, I could now retire my second shelf if I moved the drives into the spare bays in the server / shelf 1. (I've been using the zero method after clearing the data from the erroring drives, then removing them once cleared.) From what I understand, UnRAID doesn't care where the drives are physically - they are tracked by serial number - so can someone confirm: if I fill the redundant b
  3. I've got a 2-drive cache pool in btrfs RAID1. HP iLO is now reporting a predictive failure (its Power On Hours are @ 44035 - 5 years, eek! The SSD has done well! - which I suspect has annoyed iLO). I have purchased a replacement drive, same size. I have read the FAQ and want to check the process is still valid:
     • Mount the new HDD in the server (I do have enough spare ports to have both connected at the same time)
     • Stop the array
     • Swap the failing cache drive with the new drive in the array config
     • Start the array
     • Wait for the btrfs balance - when complete "stop array" will become
  4. Yup, all good; now using CUDA 11.1 when pulling the cuda11 tag. Dave
  5. If you used the linuxserver nvidia plugin (before it was pulled) then yes, you are on 440.59. If you use the prebuilt kernel files from here: you can update the 6.8.3 nvidia drivers to CUDA 11. (I'm on 6.8.3 with driver 450.55 - CUDA 11.0 - and I see it's now on 455.45.01.) The usual disclaimers apply as to whether you think it's worth playing about with; I did, as I needed something more up to date for Plex transcoding. Dave
  6. The example config.json that was pulled before the latest update has syntax errors (no space before the colons), so those settings are ignored. Either add a space on the api-bind-http variable or pull a fresh config.json from GitHub, as it doesn't update once pulled (see the config sketch below the list). Dave
  7. Yeah, when I next take down the array I'll attempt to run the diagnostics for you. I'm not surprised I'm a bit of a fringe case, running an H221 with an HP MSA60 enclosure (12 disks) and 12 spare unconfigured disks on the second channel... so even if this were a supported configuration, it could be the MSA60's backplane firmware etc. God bless HP and its oddities. Dave
  8. Installed the plugin today:
     Feb 13 18:35:55 demiplane kernel: mdcmd (144): spindown 14
     Feb 13 18:35:55 demiplane SAS Assist v0.85: spinning down slot 14, device /dev/sdr (/dev/sg19)
     Feb 13 18:36:07 demiplane kernel: mdcmd (145): spindown 2
     Feb 13 18:36:07 demiplane SAS Assist v0.85: spinning down slot 2, device /dev/sdc (/dev/sg3)
     Feb 13 18:36:08 demiplane kernel: mdcmd (146): spindown 9
     Feb 13 18:36:08 demiplane SAS Assist v0.85: spinning down slot 9, device /dev/sdj (/dev/sg10)
     Feb 13 18:36:27 demiplane kernel: mdcmd (147): spindown 6
     Feb 13 18:36:27 demiplane SAS Ass
  9. Not that you want to hear this - but my recipes and meal plans migrated to version 3 with no issues at all. Dave
  10. Nope, I forked the repository and updated the version. To use my version, edit your docker and change the repository to flubster/gsdock and it'll pull my updated image. You'll need to add the mapping I stated above, as I haven't created an UnRAID template, just pushed another docker image. YMMV. Be careful: I started from scratch (after pulling the initial image from the template), so I haven't tried any upgrades etc. Rubberbandboy
  11. The author changed the GitHub repo but didn't push to Docker Hub to rebuild the image. After manually updating the container, I noticed on 11.3.3.2 (possibly 11+) that for it to register I needed to create a folder mapping to container path /etc/goodsync/, as it appears the license is stored there rather than where it was before (see the mapping sketch below the list). Rubberbandboy
  12. I had the same issue last week 😞 I managed to correct the filesystem with TestDisk on a Windows machine using a USB caddy. YMMV. I got the data off, then started again. Be warned: TestDisk can make any recovery impossible if you select the wrong options, and it can be difficult to use! So good luck; better GUI-based tools may exist, but I've always used TestDisk personally - I've had a Linux LVM RAID array die before, and it was the only thing that could read it, let alone restore the filesystem. Dave
  13. I'm using a cron job with rclone's WebDAV remote to copy objects to my Nextcloud instance locally. As it's using a WebDAV connection, I don't need to worry about the occ commands or permissions: Nextcloud deals with the files directly, as if you had uploaded them via the GUI or web. All my local file changes are synced to an UnRAID file share on my laptop's shutdown, which in turn syncs to Nextcloud every hour via a custom cron container so it'll be available remotely (see the rclone sketch below the list). (The files are not edited remotely, just viewed.) But I can't see any reason why you couldn't sync both ways if
  14. The packages clamav-libunrar and unrar are missing from the Dockerfile, and clamav is out of date, as the Dockerfile installs a specific version rather than the latest from the Alpine packages. You can fix it (until the maintainer sorts it) by opening a console and running:
      apk update
      apk del clamav
      apk add clamav
      apk add clamav-libunrar
      apk add unrar
      then restarting the container. Dave
  15. If you look here you will see it's a direct pull from another image; all spaceinvaderone has done is provide the XML for the UnRAID definitions. Maybe research the base image to see if there are any missing variables you can manually add. I personally migrated to the migoller/shinobidocker:microservice-ffmpeg image so I can point it at my own MariaDB docker (see the sketch below the list). Dave
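
Regarding post 1: a minimal sketch of sending a USG's detailed firewall logs to a remote syslog server, using the EdgeOS/Vyatta-style set commands available over SSH. The syslog server IP and log level are placeholders, not from the post, and on a controller-managed USG the change would need to go into config.gateway.json to survive reprovisioning.

    # SSH into the USG, then enter configure mode
    # (192.168.1.10 is a placeholder syslog server IP):
    configure
    set system syslog host 192.168.1.10 facility all level debug
    commit
    save
    exit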
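
Regarding post 6: a minimal illustration of the spacing quirk described there. The key api-bind-http comes from the post; the value and the file path are made-up placeholders.

    # Broken form from the pulled example config.json (no space before
    # the colon), which the app reportedly ignores:
    #   "api-bind-http": "0.0.0.0:8080"
    # Working form, with a space before the colon:
    #   "api-bind-http" : "0.0.0.0:8080"
    # Quick in-place fix from a console (config path assumed):
    sed -i 's/"api-bind-http":/"api-bind-http" :/' /config/config.json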
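
Regarding post 11: a minimal sketch of the extra folder mapping described there. The container path /etc/goodsync/ and the flubster/gsdock image come from the posts; the host path and container name are assumptions.

    # Map a host folder to /etc/goodsync/ so the license survives
    # container rebuilds (host path is an assumption):
    docker run -d \
      --name=goodsync \
      -v /mnt/user/appdata/goodsync/etc:/etc/goodsync/ \
      flubster/gsdock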
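
Regarding post 13: a minimal sketch of an rclone WebDAV remote pointing at Nextcloud, plus an hourly cron entry for the one-way copy. The remote name, share path, and server URL are placeholder assumptions, not taken from the post.

    # ~/.config/rclone/rclone.conf (create via `rclone config` or by hand):
    #   [nextcloud]
    #   type = webdav
    #   url = https://cloud.example.com/remote.php/dav/files/USERNAME
    #   vendor = nextcloud
    #   user = USERNAME
    #   pass = <obscured password from `rclone obscure`>
    # Hourly one-way copy from the UnRAID share into Nextcloud:
    0 * * * * rclone copy /mnt/user/documents nextcloud:documents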
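
Regarding post 15: a minimal sketch of running the migoller/shinobidocker:microservice-ffmpeg image against a separate MariaDB container. The environment variable names, credentials, and port are assumptions based on common Shinobi docker setups, not confirmed by the post; check the base image's documentation for the actual variables.

    # Point Shinobi at an existing MariaDB container
    # (all variable names/values below are assumptions):
    docker run -d \
      --name=shinobi \
      -e MYSQL_HOST=192.168.1.50 \
      -e MYSQL_USER=shinobi \
      -e MYSQL_PASSWORD=changeme \
      -e MYSQL_DATABASE=ccio \
      -p 8080:8080 \
      migoller/shinobidocker:microservice-ffmpeg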