aspiziri

Members
  • Posts: 17
  • Joined
  • Last visited

aspiziri's Achievements: Noob (1/14)

Reputation: 6

Community Answers: 1

  1. Haven't seen it happen over the past few days, but I'll circle back with diagnostics if I run into it again.
  2. It's intermittent for me, so I'll see if I can spot a trend the next time it happens. The last time there were 3 containers in the update list: InfluxDB, Redis and a random other one. The InfluxDB one and the random other one are both decent in size (150MB+), so it's possible the size is related? This is what I see after the loop starts:
  3. Ah ok. I was on 6.9.3 for many years and only recently upgraded, which is when I started running into this. So it's something introduced in 6.10 or later.
  4. I've been running into a bug with updating Docker containers on 6.12.3 recently. When a few containers have updates and I click "Update all", the popup goes through and updates everything on the first pass, but then it keeps looping through the containers over and over, attempting to update them again. On those later passes there is nothing new to pull for any container: "Status: Image is up to date for {containerName} TOTAL DATA PULLED: 0 b". It just repeats this endlessly. There is no close button because Unraid thinks it is still updating, so the only way to dismiss the modal is to refresh the page. I'm not sure whether it is still running in a loop under the covers until I restart? I took a quick look through the 6.12.4 patch notes and don't see anything about this mentioned, so I figured I would report it. (For what a single, terminating update pass looks like, see the first sketch after this list.)
  5. Got it - ok, thanks. Sounds like I only have one path then, so things are pretty clear:
     • Use NerdPack to uninstall any existing "packs"
     • Uninstall the NerdPack plugin
     • Upgrade to 6.12.x
     • Install NerdTools
     • Reinstall any "tools" that I need
     • Update my User Scripts, etc. to account for any changes that come from this (ideally none?)
     (A sketch for taking an inventory of installed NerdPack packages is after this list.)
  6. I am currently running 6.9.2 and have been looking into upgrading to the latest release and what it would take. One of the things I have found is that NerdPack was deprecated with the 6.11 release and has since been replaced by NerdTools from CA. My question is: if I simply perform the Unraid update, what actually happens to the "already installed" NerdPack services? Do they continue working but with no way to manage them anymore (i.e. should I uninstall everything first)? Do they disappear? If I install NerdTools and then re-install the corresponding services, should all my user scripts continue working "as is", or do I likely have a bunch of tinkering in my future? Wondering if anyone has already gone through this process.
  7. According to the Docker Hub page there is literally only a single listed known issue: "Saving and restoring settings backup via the BlueIris interface does not work!"
  8. +1 on this. I'd love to understand even at a high level the performance and resource usage comparison between the two.
  9. I'm hitting the same issue that alael is, using the default settings. As soon as the docker container starts it hits this and fails out. I followed the guide video to make sure I removed the old version, etc. before trying to install and run the latest, so I'm not sure what would be going wrong. Edit: It appears that switching to method 2 for downloading in the template avoids this error, but the logs seem to indicate the download never completes. When I open the macinabox_Big Sur.log file I see "I am going to download BigSur recovery media. Please be poatient!" ... "Product ID 001-83532 could not be found"
  10. [Solved] Unfortunately I tried many, many things and don't know what specifically brought it back online, but all good now.
     Yesterday we had an internet outage (not actually sure what went wrong, but basically I unplugged and replugged everything and it started working again), and for whatever reason every device in my home is back online just fine except for the Unraid box, which keeps pulling a 169.254.x.x IP. From my research so far, that seems to indicate it failed to get a proper DHCP lease (see the link-local sketch after this list). In addition to not being a correct IP from my DHCP server, I also can't connect to this IP from another machine via the web UI. My router is a pfSense box which allocates 192.168.125.100 through 192.168.125.254 for dynamic leases, and I also have a static IP outside of this range (192.168.125.10) configured for the Unraid box, which it can't seem to grab. To be clear, this configuration was working fine until the outage yesterday, and every other device on my network is grabbing an IP within range and has internet access no problem.
     Things I have tried:
     • Restarting Unraid
     • Restarting pfSense
     • Using a different network cable from the Unraid box to my switch
     • Using a different port on the switch (Note: all other ports are in use and all devices besides Unraid pull IPs just fine)
     • Deleting the static IP configuration in pfSense
     • Re-adding the static IP configuration in pfSense
     • Deleting the network config file on my Unraid boot drive
     I have grabbed the diagnostics and attached them here in case anyone has any ideas. I don't have a ton of experience looking through these logs, but this seems to be relevant:
     Oct 20 22:33:12 Apollo kernel: device bond0 entered promiscuous mode
     Oct 20 22:33:12 Apollo kernel: IPv6: ADDRCONF(NETDEV_UP): bond0: link is not ready
     Oct 20 22:33:12 Apollo kernel: 8021q: adding VLAN 0 to HW filter on device bond0
     Oct 20 22:33:12 Apollo kernel: br0: port 1(bond0) entered blocking state
     Oct 20 22:33:12 Apollo kernel: br0: port 1(bond0) entered disabled state
     Oct 20 22:33:12 Apollo rc.inet1: polling up to 60 sec for DHCP server on interface br0
     Oct 20 22:33:12 Apollo rc.inet1: timeout 60 dhcpcd -w -q -t 10 -h Apollo -C resolv.conf -4 br0
     Oct 20 22:33:12 Apollo dhcpcd[2238]: br0: waiting for carrier
     Oct 20 22:33:13 Apollo kernel: ixgbe 0000:05:00.0 eth0: NIC Link is Up 1 Gbps, Flow Control: RX/TX
     Oct 20 22:33:13 Apollo kernel: bond0: link status definitely up for interface eth0, 1000 Mbps full duplex
     Oct 20 22:33:13 Apollo kernel: bond0: making interface eth0 the new active one
     Oct 20 22:33:13 Apollo kernel: device eth0 entered promiscuous mode
     Oct 20 22:33:13 Apollo kernel: bond0: first active interface up!
     Oct 20 22:33:13 Apollo kernel: br0: port 1(bond0) entered blocking state
     Oct 20 22:33:13 Apollo kernel: br0: port 1(bond0) entered forwarding state
     Oct 20 22:33:13 Apollo dhcpcd[2238]: br0: carrier acquired
     Oct 20 22:33:14 Apollo dhcpcd[2238]: br0: soliciting a DHCP lease
     Oct 20 22:33:19 Apollo dhcpcd[2238]: br0: probing for an IPv4LL address
     Oct 20 22:33:24 Apollo dhcpcd[2238]: br0: using IPv4LL address 169.254.35.72
     Oct 20 22:33:24 Apollo dhcpcd[2238]: br0: adding route to 169.254.0.0/16
     Oct 20 22:33:24 Apollo dhcpcd[2238]: br0: adding default route
     Oct 20 22:33:24 Apollo dhcpcd[2238]: forked to background, child pid 2280
     Anyone have ideas on what could be going wrong or things to try?
  11. Update on my issue: it does appear the setting is no longer available on the edit VM page, but it can still be changed. You need to do it from the VM list: click on the name of the VM in the list and it will expand to show more details, and from that expanded view you can edit the vdisk size. (A command-line sketch for growing a vdisk is also after this list.)
  12. I've got the same issue on 6.8.3. Did we lose the ability to change vdisk size with the 6.8 upgrade?
  13. That's the other option I've actually landed on in the meantime. I set everything to hidden and then mapped the specific directories on each device. It's not ideal, but it's livable at least until I find a way to do what I was hoping for (if it is even possible).
  14. I've read many forum posts and articles about this but can't seem to get things to behave the way I'm hoping. Perhaps someone here can help me figure out if this is possible. Today I have a set of shares similar to the following:
     1. /personal-andrew
     2. /personal-christina
     3. /photos
     4. /music
     What I want is for andrew to see #1, #3 and #4 when he connects with his user, and for christina to see #2, #3 and #4 when she connects. Is there a way to achieve this? So far the closest I have been able to get is setting all of the shares to Export=Yes, Security=Private and then setting Read/Write vs. No Access for each share. That controls access to the shares, but everyone can still see the name of every share. I'm hoping to filter the list down to only the shares you have access to. (See the share-visibility sketch after this list.)
  15. I love that Unraid's pool configuration gives me total flexibility. I can set it to just handle everything for me and abstract away the complexity, or I can slice and dice how my files are managed across drives and the folder hierarchy. It allows someone to easily begin using the platform and then grow into more advanced management when they feel confident. In the future I would love to be able to include SSDs as part of the pool, allowing me to configure certain shares to live on fast SSDs while the rest stay on spinning disks. Cache drives eliminate the need for some of this, but it would be cool to give users the flexibility to go directly to the pool with spare drives.
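
For context on the update loop in item 4: a well-behaved "Update all" pass should pull each image at most once and stop as soon as a follow-up pull reports nothing new. The sketch below is a rough approximation of that check using the plain docker CLI from Python; it is not how Unraid's updater actually works, and the image names are placeholders.

```python
# Sketch: one "update pass" over a list of images, stopping when nothing changes.
# Assumptions: the docker CLI is on PATH and the image names below are placeholders.
import subprocess

IMAGES = ["influxdb:latest", "redis:latest"]  # hypothetical update list

def image_id(image: str) -> str:
    """Return the local image ID, or '' if the image is not present."""
    result = subprocess.run(
        ["docker", "image", "inspect", "--format", "{{.Id}}", image],
        capture_output=True, text=True,
    )
    return result.stdout.strip() if result.returncode == 0 else ""

def update_pass(images) -> bool:
    """Pull each image once; report whether any image actually changed."""
    changed = False
    for image in images:
        before = image_id(image)
        subprocess.run(["docker", "pull", image], check=True)
        after = image_id(image)
        if after != before:
            print(f"{image}: updated")
            changed = True
        else:
            # This is the "Image is up to date ... TOTAL DATA PULLED: 0 b" case.
            print(f"{image}: already up to date")
    return changed

if __name__ == "__main__":
    update_pass(IMAGES)
```

A second pass run immediately after the first should report every image as already up to date, and that "0 bytes pulled" result is the natural stopping point rather than a reason to keep looping.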
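
Before the NerdPack-to-NerdTools migration in items 5 and 6, it helps to have a written inventory of what NerdPack had installed so the same tools can be reinstalled afterwards. Below is a minimal sketch; the package directory path is an assumption about where NerdPack keeps its downloaded .txz files on the flash drive, so verify it on your own system before relying on the output.

```python
# Sketch: list the Slackware .txz packages NerdPack has stored on the flash drive,
# as a checklist of what to reinstall via NerdTools after upgrading.
# Assumption: the directory below is where NerdPack keeps packages; verify on your box.
from pathlib import Path

NERDPACK_DIR = Path("/boot/config/plugins/NerdPack/packages")  # assumed location

def installed_packages(base: Path):
    """Yield package file names (without the .txz suffix) found under base."""
    if not base.is_dir():
        return
    for pkg in sorted(base.rglob("*.txz")):
        yield pkg.stem

if __name__ == "__main__":
    names = sorted(set(installed_packages(NERDPACK_DIR)))
    if names:
        print("Reinstall these via NerdTools after the upgrade:")
        for name in names:
            print(f"  - {name}")
    else:
        print(f"No packages found under {NERDPACK_DIR}; check the path.")
```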
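
On the 169.254.x.x address in item 10: that entire block is the IPv4 link-local (APIPA) range dhcpcd falls back to when no DHCP server answers, which is exactly what the "probing for an IPv4LL address" lines in the log show. A small standard-library check, using the address from that log and assuming the pfSense LAN is a /24:

```python
# Sketch: confirm the address dhcpcd ended up with is an IPv4 link-local (APIPA)
# fallback rather than a real lease from the DHCP server.
import ipaddress

fallback = ipaddress.ip_address("169.254.35.72")      # from the dhcpcd log in item 10
dhcp_pool = ipaddress.ip_network("192.168.125.0/24")  # pfSense LAN, assuming a /24

print(fallback.is_link_local)   # True: 169.254.0.0/16 means no DHCP answer arrived
print(fallback in dhcp_pool)    # False: this address was never handed out by pfSense
print(ipaddress.ip_address("192.168.125.10").is_link_local)  # False: the intended static IP
```

This only confirms the diagnosis in the post (the box never got a lease); it says nothing about why the DHCP exchange failed.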
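
On the vdisk resizing in items 11 and 12: with the VM shut down, a vdisk image can also be grown from the command line with qemu-img; the partition and filesystem inside the guest still have to be expanded separately. This is a sketch, not the supported Unraid workflow, and the image path is a made-up example.

```python
# Sketch: grow a VM's vdisk image with qemu-img (power the VM off first).
# The image path is a placeholder; the in-guest partition/filesystem must still
# be expanded afterwards with the guest OS's own disk tools.
import subprocess

VDISK = "/mnt/user/domains/ExampleVM/vdisk1.img"  # hypothetical path
GROW_BY = "+20G"                                  # only grow; shrinking risks data loss

subprocess.run(["qemu-img", "info", VDISK], check=True)             # size before
subprocess.run(["qemu-img", "resize", VDISK, GROW_BY], check=True)  # grow by 20 GiB
subprocess.run(["qemu-img", "info", VDISK], check=True)             # size after
```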
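
On hiding shares per user (items 13 and 14): Samba itself has an "access based share enum" option that hides shares the connecting user has no rights to open, which is the behavior item 14 is asking about. Whether and where Unraid exposes that setting isn't established by these posts, so treat the snippet below as a hypothetical smb.conf fragment to review, not a documented Unraid toggle.

```python
# Sketch: print a Samba [global] fragment that makes share enumeration access-based,
# so users only see the shares they can actually open.
# Where (or whether) to apply this on an Unraid box is an assumption; review first.

FRAGMENT = """\
[global]
    # Hide shares the connecting user has no permission to access.
    access based share enum = yes
"""

if __name__ == "__main__":
    # Print rather than write anywhere, so it can be reviewed and pasted into
    # whatever Samba extra-configuration mechanism the server provides.
    print(FRAGMENT)
```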