Shesakillatwo

Members · 34 posts

  1. I also experienced this issue with "binhex-krusader" several times on two different Unraid servers. I noticed that the "Extracting" and/or "Pull Complete" step on a larger component fails to finish, and the update then jumps past any remaining components straight to the end of the process. It is not always the same component that fails. I also saw this issue today on my MariaDB Docker update, which completed successfully when I ran it a second time. This feels like a timeout or some other failure to move into the extracting/completion phase while updates are occurring. Any thoughts would be greatly appreciated. I definitely do not want my MariaDB Docker corrupted due to a failure to complete a Docker update.
  2. I experienced this issue the other day as well. I did nothing for a few days, tried the update again last night, and it worked. It appeared to me that when this issue occurs, one of the larger components/files never actually reaches the "Pull Complete" stage, after which I still see the indicator that an update is available. I have seen this before as well. Could it be an Unraid timeout issue?
  3. I have a motherboard with built-in Ethernet. I am receiving a message through Fix Common Problems: "Realtek R8168 NIC found" and "The default Realtek NIC driver is known to have issues, consider installing the driver plugin from Community Applications if you are having stability issues or trouble with networking. Search for: r8168 in Community Applications, install the plugin and reboot your server." I looked online and this motherboard's spec sheet says it has a "Realtek® 8111H Gigabit LAN Controller". This looks to be supported by the "RTL8168(B)/RTL8111(B) PCI Drivers" app, if I am reading this correctly. I see this error, but I am not experiencing any specific stability issues that I have noticed. I have, however, noticed that if I copy several (5 to 10, for example) large video files to my media array, after about 2 files copying at a speed of 100 GB, the whole process slows down to about 15GB to 30GB for the duration. Things do finish without error, but the process takes a long time. I am not extremely techy at the command line, and it feels like every time I do something like this I end up with unintended negative outcomes, which are often a challenge for me to correct. So, before I embark on adding this app/driver: is it likely to correct the slowdown I see when copying multiple large files? And are there any other concerns, or should I simply ignore the error that comes into Fix Common Problems on this? Any thoughts would be greatly appreciated.
  4. So can I just go into the isos and system share settings and change them from what they are now to Array -> Cache without doing anything else? Then just wait until mover runs, and then go under Settings -> Global Share Settings and enable the option for Exclusive Shares? Sorry, somewhat of a newbie and do not want to cause other issues.
  5. I have two Unraid servers, and while troubleshooting a different issue I noticed that the mover settings for the two servers are not quite the same for two directories: isos and system. This seemed odd to me since they are both basically system folders. I am not having any problems or failures from this, but I am a bit confused why things might be different and what the best setup is. I realize I have the option of choosing these settings regardless of what the defaults might be when building these servers, but I am curious what the recommendations are, as these are system shares/folders. I have cache pools on both servers and two parity drives on each server. For the isos folder/share, both of my Unraid servers are set to Cache -> Array, and both have cache as the primary storage. For the system folder/share, one of my Unraid servers is set to Cache -> Array and the other is set to Array -> Cache; both again have cache as the primary storage. So, is there a best setup for performance here? And if I need to adjust any of these settings, is that just a simple switch in the setup for these shares, or do I have to do some other work?
  6. Every time I update some of my Docker containers I get the "Docker high image disk utilization" message. Over time this seems to be increasing, and it has now reached 80% with recent Docker updates. The container sizes from the Docker page/tab:

     Name                       Container  Writable  Log
     onlyofficedocumentserver   3.60 GB    353 MB    476 kB
     binhex-krusader            2.68 GB    0 B       36.1 MB
     freepbx                    2.07 GB    32.0 MB   36.1 kB
     nextcloud                  1.12 GB    239 MB    7 kB
     guacamole                  781 MB     45.9 MB   16.0 kB
     swag                       636 MB     154 MB    16.6 kB
     plex                       340 MB     21.5 kB   3.70 kB
     mariadb                    306 MB     23.4 kB   5.45 kB
     adminer                    250 MB     0 B       36.1 MB
     vaultwarden                214 MB     0 B       17.8 kB
     heimdall                   182 MB     51.5 MB   6.12 kB
     nginx                      147 MB     25.2 kB   5.90 kB
     Total size                 12.3 GB    875 MB    72.7 MB

     The Settings page shows this Docker volume info:

     btrfs filesystem show:
       Label: none  uuid: "ID Removed"
       Total devices 1  FS bytes used 20.21GiB
       devid 1  size 30.00GiB  used 25.52GiB  path /dev/loop2

     btrfs scrub status:
       UUID: "ID Removed"
       no stats available
       Total to scrub: 20.63GiB  Rate: 0.00B/s
       Error summary: no errors found

     Since "Total to scrub" is 20.63 GiB, should I run it, and will it reduce the size? I looked but could not find details on the Scrub button. Also, if this is not a solution, any thoughts on what the issue might be here?
  7. I so much wanted this Docker to work, but I had nothing but issues with it. I switched to running BI in a VM (had to buy a Windows 10 license for about $29 USD) and have had virtually no issues since. Maybe 2 years now? I also upgraded the VM to Windows 11 since it was an option, and it has continued to work very well. I also recall that when I made the change, the Unraid CPU usage was not really any higher with the VM than with the Docker, so no real downside from my perspective. Just an FYI.
  8. OK, thanks so much. I must have misinterpreted some things I read on Drive sizes. I knew you could not have data drive space larger than the Parity drive space but I thought I read if I put a physical drive in for data that is larger than the Parity drive the data drive would only use the amount up to the Parity drive limit. Thanks for clarifying that this is not possible or correct!
  9. I have two 6TB parity drives in my server, and all my data drives are 4TB. I want to increase the size of several of my data drives from 4TB to at least 6TB. However, looking online, it appears that 8TB drives are at about the same price point as 6TB drives these days. I know I can put 8TB data drives in place of the current 4TB drives, but if I leave the current 6TB parity drives in place for now, the new 8TB drives will only use 6TB. Now to my question: if I do this now to address my immediate need and add larger parity drives later, will adding 8TB or 10TB parity drives later immediately allow access to the remaining 2TB of space on the 8TB data drives I add today, or will I need to do some other tweaks or adjusting to get the additional space for use?
  10. Thanks so much for the quick reply. I am glad to hear that if using the device mapping it does not matter what port my coordinators are plugged into. That puts that concern to rest! On the question of "Did you set them to auto connect in the mappings at VM start", I am pretty sure I did but when tinkering at the time I was first setting this up maybe I made a mistake? I will see how it goes for now and let you know if I see any additional issues. Probably user error..... BTW, this is an Awesome Plugin. Before I found it I was really struggling with how I was gonna make this work for my three Home Assistant coordinators so this was a gold find for me. Thanks!
  11. I just updated some of the USB coordinator sticks for my Home Assistant VM running on Unraid. I needed to install this plugin (which is amazing) because all three of my new coordinators were being identified as the same hardware, which was causing issues with the standard USB passthrough; as a result my Home Assistant would not start and displayed an execution error. After fumbling a bit, I got this set up using port mappings and it worked well. However, it seemed that if I rebooted the VM, not all the USB devices reconnected properly. I was able to reconnect them from the USB Manager GUI, but I would prefer not to have to remember to do this every time I reboot. So I updated my settings and mapped these coordinators as devices, which also worked. I have not rebooted and will not unless I have some issues or need updates. I was curious, though, whether others have had similar experiences, and which method (port or device) you are using to map to Home Assistant? Lastly, with my coordinators mapped as devices rather than ports, if I unplug a coordinator and move it to a different USB port, will it still be recognized and connect properly? I kind of assumed that if I used the port method and connected to a different port when re-plugging the coordinator, it might not be recognized. Thanks!
  12. I am experiencing an issue where my USB sticks (I have a Sonoff, a Nortek, and a 433 radio) appear to randomly lose connection with Home Assistant. How did you reset the boot order mentioned above, and what order did you use?
  13. I am installing Authelia today and had one issue that required me to run mysql_upgrade -u root -p against my existing MariaDB to get past an error in the log. I no longer see any log errors on startup, but this is all I see in the logs:

      time="2023-01-16T12:42:52-05:00" level=info msg="Authelia v4.37.5 is starting"
      time="2023-01-16T12:42:52-05:00" level=info msg="Log severity set to info"
      time="2023-01-16T12:42:52-05:00" level=info msg="Storage schema is being checked for updates"
      time="2023-01-16T12:42:52-05:00" level=info msg="Storage schema is already up to date"
      time="2023-01-16T13:02:56-05:00" level=info msg="Initializing server for non-TLS connections on '[::]:9098' path '/'"

      I did have to change the port I use from 9091 to 9098, as port 9091 is already in use on my network. I do not see an IP before the :9098 like in the install video, but I am not sure why. I am also unable to log into the app. Any help would be appreciated. Thanks!
  14. I have purchased a domain and set up the Cloudflared tunnel. I am using SWAG as the reverse proxy. I have edited the app.subdomain.conf files for several things, and I have some endpoints accessible from the outside world while some are not. This is what I am trying to fix. I have two Unraid servers, and the Cloudflared Docker is installed on my "dataserver". I have a few other Dockers and two VMs that I want to be able to access through the tunnel. I cannot access the VMs or the one AdGuard Docker on this machine, despite having tried many things. I am, however, able to access my backup AdGuard instance (which is running on a Raspberry Pi) fine through the Cloudflared tunnel. I have configured both the main and backup Cloudflare tunnels the same except for their specific IPs, so I do not understand why one works and the other does not. I do have separate subdomain.conf files for each. I was also hoping to access the "VM Console (VNC)" for one of the two VMs on this Unraid server, but I cannot figure out how to configure this. I also cannot access a VM on the second Unraid server; again, I do not know how to configure access to the "VM Console (VNC)" on that VM. Any help on this would be appreciated. I can access my main router from outside the network fine through router.my_domain.com, and that works. I can also access some Dockers on my second Unraid server "mediaserver", including plex, nextcloud, guacamole, and vaultwarden, but I cannot access others, like my freepbx Docker.
  15. Did some more digging based on the results from the command you suggested. Found a similar post so I deleted the My Servers plugin and reinstalled it and all appears to be working now..... Thanks so much!
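For the unreachable endpoints in item 14, SWAG's subdomain confs generally follow the project's sample template in /config/nginx/proxy-confs. A sketch along those lines for an AdGuard instance; the subdomain, LAN IP, and port below are placeholders for illustration, not values taken from the posts:

```nginx
# adguard.subdomain.conf - modeled on SWAG's sample subdomain template.
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name adguard.*;            # placeholder subdomain

    include /config/nginx/ssl.conf;

    location / {
        include /config/nginx/proxy.conf;

        set $upstream_app 192.168.1.50;   # placeholder: LAN IP of the AdGuard host
        set $upstream_port 3000;          # placeholder: AdGuard web UI port
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}
```

Comparing a conf like this between the working backup instance and the failing one (upstream IP, port, and protocol in particular) is often where the difference shows up.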
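A note on the btrfs numbers in item 6: scrub verifies checksums and reports corruption; it does not free space, so it would not be expected to shrink the image. The utilization warning tracks how full the loop-device image is. A minimal sketch of the arithmetic, using the figures from the "btrfs filesystem show" output above (the idea that the warning is driven by allocated space, and the exact threshold, are assumptions, not read from the server):

```python
# Rough utilization math for a Docker image file (btrfs on /dev/loop2).
# Numbers copied from the "btrfs filesystem show" output in item 6.

IMAGE_SIZE_GIB = 30.00   # "devid 1 size 30.00GiB"  -> total image size
ALLOCATED_GIB = 25.52    # "used 25.52GiB"          -> chunk-allocated space
BYTES_USED_GIB = 20.21   # "FS bytes used 20.21GiB" -> actual data written

allocated_pct = 100 * ALLOCATED_GIB / IMAGE_SIZE_GIB
data_pct = 100 * BYTES_USED_GIB / IMAGE_SIZE_GIB

# On btrfs, "used" (allocated chunks) is always >= "FS bytes used",
# which is why a warning can fire well before the data itself fills the image.
print(f"allocated: {allocated_pct:.0f}% of image")
print(f"data used: {data_pct:.0f}% of image")
```

With these numbers the allocation sits around 85% even though the data is closer to 67%, which is consistent with the warning climbing after a string of updates.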
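For the port conflict in item 13, it can help to confirm from the server itself whether a port is actually bound before reassigning it. A small sketch using only the standard library (`port_in_use` is a hypothetical helper, not part of Authelia or Unraid):

```python
import socket

def port_in_use(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on a successful TCP connect, an errno otherwise.
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    # Check the original and the replacement Authelia ports on this machine.
    for port in (9091, 9098):
        print(port, "in use" if port_in_use("127.0.0.1", port) else "free")
```

This only tests the local host; a port that is free locally can still be claimed by another device or container elsewhere on the network.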