cr08

  1. This started probably within the past month and happens on roughly a weekly basis, though with no strict timing. What happens is that all inbound network connectivity is seemingly lost. Pings work, and a port scanner still shows all my open ports, docker container ports, etc., but nothing can actually be accessed. Outbound seems to work fine: I have some scripts that access external resources that continue to work during this apparent dead time, and I've also had syslog pointed at a Pi that continued to receive entries normally (sadly it looks like the Pi went offline during this last occurrence, so I don't have anything to provide there unless the attached diagnostics include something). Around the time this started I did get the alert about the whole macvlan/ipvlan thing. It never seemed to cause issues before, but I went in and made the suggested changes and it doesn't seem to have helped here. This is a Dell PowerEdge T20, mostly stock: Xeon E3-1225 v3 and a PERC H200 with a number of drives, plus a GTX 950 for transcoding duties. Unfortunately the 950 makes things a bit of a pain, since Dell's design of this machine disables the iGPU when a dGPU is installed, with no BIOS option to change this. That also kills KVM access via IPMI, since it relies on the iGPU. So when this happens I'm basically forced to do a hard restart, as there's no soft reset option via IPMI. Really frustrating given the system seems to be functional otherwise. Diagnostics are attached; hopefully someone has some good insight. Such an annoying bug to deal with. I haven't done much so far beyond the ipvlan changes and haven't had a clue where to start troubleshooting. nas-diagnostics-20240203-2055.zip
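For anyone hitting the same symptom (outbound fine, inbound dead), a few things are worth capturing from the console or IPMI serial session while the box is wedged. This is only a diagnostic sketch, not a fix; the usual suspects with macvlan/ipvlan setups are conntrack table exhaustion and SYN-flood backlog, and the helper below just turns the two `/proc` counters into a percentage:

```shell
#!/bin/sh
# Helper: conntrack table usage as a percentage, given current and max counts.
conntrack_pct() {
  # usage: conntrack_pct <count> <max>
  echo $(( $1 * 100 / $2 ))
}

# On a live box, read the kernel's counters if the conntrack module is loaded.
if [ -r /proc/sys/net/netfilter/nf_conntrack_count ]; then
  c=$(cat /proc/sys/net/netfilter/nf_conntrack_count)
  m=$(cat /proc/sys/net/netfilter/nf_conntrack_max)
  echo "conntrack usage: $(conntrack_pct "$c" "$m")% ($c/$m)"
fi

# Other things worth grabbing while inbound traffic is dead:
#   dmesg | grep -iE 'conntrack|syn flood|macvlan|ipvlan'
#   ss -s          # look for piles of half-open (SYN-RECV) sockets
#   ip -br link    # confirm br0 / shim interfaces are still up
```

If the usage number sits near 100% when the symptom appears, that would explain new inbound connections being dropped while established outbound sessions keep working.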
  2. Just happened upon this plugin. Didn't realize the old CA Backup plugin was deprecated until I actually went to its plugin UI. I do like seeing individual configs per docker container. Nice add! Which brings me to a question: is it possible to also allow custom scheduling per container? Some I would prefer to keep on a daily backup, while others I don't need nearly that often, especially if the plugin is capable of only stopping containers that are actively being backed up. (If memory serves this was not possible in the old plugin, which stopped everything when a backup ran, but my memory is fuzzy on that, and I'm not sure of the behavior in this plugin. Still going over it.) One problematic container this would help with is the Unifi controller. I've been able to reproduce issues with my APs dropping some clients (mainly Kindle Fire tablets used as 24/7 wall-mounted displays) when the controller is stopped/started, requiring manual intervention on each device. With the Unifi container kept off unless needed, or the backup process temporarily disabled, the issue has not come up. Being able to isolate this container and keep it at something like a monthly backup would be really nice.
  3. Awesome to see this! LXC is something I've sorely missed since migrating from Proxmox years back. Install was straightforward and I got a Debian container going, no fuss. Eager to see where this goes, and crossing fingers that maybe this can be mainlined into Unraid proper down the road? I have a handful of custom scripts that I haven't migrated to docker yet and that run infrequently, and it has been a terrible waste keeping a full-fat VM running 24/7 just for them. Moving them back over to LXC is going to be such a nice change. One thing I do have to ask: am I missing something to see container memory usage? I'm seeing the other container stats, but memory usage isn't showing.
  4. Looks like another release was pushed just a few minutes ago (right after I updated and saw this error as well) and it seems to have resolved the issue.
  5. Dumb question, but is there an alternate method to install these on a -running- system? It seems very short-sighted that the only documented method requires a full system reboot to install packages. I also hope a proper package manager is on the short list of features to be added to Unraid, since we're getting pushed in this direction.
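A sketch of one workaround, with the caveat that the package name below is just an example: Unraid is Slackware-based, so Slackware's `installpkg` can install a `.txz` on the live system with no reboot. If memory serves, the reboot-centric documentation exists because packages dropped into `/boot/extra` on the flash drive are reinstalled automatically at every boot, which is how you make the install persist on an OS that runs from RAM.

```shell
#!/bin/sh
# Install a Slackware package on a running Unraid box, then stage it on the
# flash drive so it survives reboots. PKG is a hypothetical example path.

PKG=/tmp/somepackage-1.0-x86_64-1.txz

if command -v installpkg >/dev/null 2>&1 && [ -f "$PKG" ]; then
  installpkg "$PKG"              # takes effect immediately, no reboot needed
  mkdir -p /boot/extra
  cp "$PKG" /boot/extra/         # /boot/extra is (re)installed at each boot
fi
```

Note the usual caveat: packages must be built against the Slackware release your Unraid version is based on, or you risk library mismatches.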
  6. Dumb question and I can't find it mentioned anywhere in this thread: Currently the tdarr app states "Tdarr_node included". However when setting this up it is not showing any nodes available. Anything I'm missing? I was hoping not to have to run a separate container since this is going to be a single node setup on the Unraid box itself. EDIT: Fixed. Apparently I had to dig just a LITTLE further and use different search terms in this thread to find the breadcrumbs I needed. TL;DR: Go to advanced settings in the container and set the Internal Node to true. It defaults to false.
  7. Subject TL;DR here. Basically I'm curious whether I'm missing something obvious, or whether there are any existing mods or tools to get actual CPU and memory usage (for a start) for individual VMs, rather than being left in the dark or resorting to SSH. I've searched the forums here as well as /r/Unraid and haven't had much luck unearthing anything. I've been considering consolidating my machines from an Unraid box and a Proxmox box down to just Unraid and migrating the VMs over, but unfortunately the stats currently available for VMs leave a lot to be desired.
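Until the web UI grows per-VM graphs, SSH plus libvirt's `virsh domstats` is the most direct source of these numbers. A minimal sketch, assuming the stock `virsh` that ships with Unraid's VM manager; the awk helper just condenses the key=value output into one line per VM:

```shell
#!/bin/sh
# Summarize per-VM CPU time and resident memory from libvirt.

summarize() {
  # Turns "virsh domstats --cpu-total --balloon" output into one line per VM.
  awk -F= '
    /^Domain:/ {
      if (name) printf "%s cpu=%.1fs rss=%.0fMiB\n", name, cpu, rss
      name = $0; sub(/^Domain: */, "", name); gsub(/['\''"]/, "", name)
      cpu = 0; rss = 0
    }
    $1 ~ /cpu\.time$/    { cpu = $2 / 1e9 }   # nanoseconds -> seconds
    $1 ~ /balloon\.rss$/ { rss = $2 / 1024 }  # KiB -> MiB
    END { if (name) printf "%s cpu=%.1fs rss=%.0fMiB\n", name, cpu, rss }
  '
}

# Only query libvirt where it exists (skipped on non-Unraid machines).
if command -v virsh >/dev/null 2>&1; then
  virsh domstats --cpu-total --balloon | summarize
fi
```

`cpu.time` is cumulative, so sampling it twice a few seconds apart and diffing gives an actual utilization percentage; `balloon.rss` is what the guest is really holding, not just what it was allocated.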
  8. Been trying to find any concrete info on this one but haven't had any luck. Essentially I'm trying to figure out how to set something close to a Linux 'nice' level of CPU priority on a VM. What I mean is that I don't want a hard limit or CPU core restriction; I want the VM to be able to use full CPU resources when they are available, but at low priority, so that when something else needs a good chunk of the CPU the VM always takes a backseat. Most results I've come across so far effectively restrict CPU cores for the VM in one form or another.
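The closest libvirt equivalent to `nice` for a whole VM appears to be the `cpu_shares` scheduler parameter: it is a relative weight (default 1024), so a low value only matters under contention, and the VM can still burst to full CPU when nothing else wants it. A sketch, with the VM name being an example; note that on newer cgroup v2 kernels the underlying knob is `cpu.weight`, though `virsh schedinfo` has historically accepted `cpu_shares` and translated it:

```shell
#!/bin/sh
# Give an example VM a quarter of the default CPU weight, so host and docker
# workloads win under contention but the VM keeps full burst capability.

VM=lowprio    # hypothetical VM name
SHARES=256    # relative weight; default is 1024

if command -v virsh >/dev/null 2>&1; then
  virsh schedinfo "$VM" --live   cpu_shares="$SHARES"   # apply to the running VM
  virsh schedinfo "$VM" --config cpu_shares="$SHARES"   # persist in the domain XML
fi
```

Running `virsh schedinfo "$VM"` with no arguments prints the current values, which is a quick way to confirm the change took.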
  9. It supports Nvidia GPUs just fine currently, at least on the encode side, and the decode side should (hopefully) be supported sometime in the near future as they update their transcoder to the upstream ffmpeg 4.0 codebase. EDIT: Source: https://support.plex.tv/articles/115002178853-using-hardware-accelerated-streaming/
  10. Going to add my +1 to this as well. While an Intel iGPU is definitely serviceable, when you need or want to upgrade to something that handles newer/better codecs, it is quite a bit easier to throw a GPU at the problem than to go through the hassle of a full system rebuild, including a new motherboard and CPU and potentially RAM as well.
  11. Oddly enough, all the connection-count settings Deluge defaulted to are much lower than what's recommended for my connection (100mb/10mb). The only things not set are the upload and download speed limits, which I've never really had an issue leaving unset in the desktop version. Also, my router is relatively decent, a Ubiquiti EdgeRouter X with Smart Queue enabled. Overall the internet connection hasn't been affected by this, JUST the ability to access the dockers themselves. That said, I have gone in and updated the bandwidth/connection settings in Deluge and will see how it fares going forward. So far with a few torrents it has been running fairly smoothly. I could have sworn I turned that advanced settings toggle on and still did not see the option. However, I have noticed that for whatever reason I've been having intermittent issues with Chrome not behaving with the unRAID WebUI. Right off the bat I had issues with the initial array setup: when choosing disks from the dropdowns the selection didn't register fully, so it wouldn't let me start the array because it thought no disks were selected. I've essentially broken down to using Edge instead whenever anything feels wonky, so this may be related here. With that said, much thanks for that link! It was one I wasn't able to find earlier, and it actually has a screenshot and a little more info on exactly what I should be looking for.
  12. Nope. /data in the SAB container is pointed at /mnt/user/Downloads. I will give it another go after making sure the min free space options are set which is where I think I was running into issues there.
  13. Coming in as a newbie user. I'd been using a bit of a kludgey setup up 'til now, consisting of a Proxmox install and mergerfs, with the pooled drive fed to various VMs via bind mount. Finally got around to 'cleaning up' this setup a bit. Long story short, I'm giving unRAID a chance, running on a Dell PowerEdge T20. Full specs: Dell PowerEdge T20, Xeon E3-1225 v3 (quad core, no HT), 12GB RAM, 2x 8TB WD Reds, 120GB Kingston SSD, 2TB Hitachi HDD, 1TB Toshiba HDD. All disks except the 1TB are attached to the onboard SATA controller; the last is on an add-on ASMedia card, all running at 6Gb/s. I'll spare you the unneeded details except to say that so far everything is pretty clean. I originally installed unRAID at version 6.6.1 and updated to .2. I initially moved data over gradually via Midnight Commander from unassigned device mounts to the user share, and once everything was moved I fully set up the array: currently all the HDDs in a 4-disk array with no parity (temporarily, until I pick up another 8TB disk) and the SSD as cache. Got a few dockers set up, namely Plex, Sonarr, Radarr, Sabnzbd, and DelugeVPN, all binhex versions, following Spaceinvader One's YouTube videos to get them going. Here are the issues I've run into, which hopefully I can get sorted before my trial runs out. (1) First is the cache disk and proper usage. Between my media and downloads shares I tried setting the cache to both yes and prefer, and after running into the following issues I eventually just set it to no for now to keep things operating smoothly.
What happened on either yes or prefer is that as content was being acquired, primarily via Sabnzbd, it would constantly fill up the cache, and upon bumping against that limit Sabnzbd would repeatedly stop and complain about lack of disk space, often during the unpacking process. It was my understanding that with cache set to either yes or prefer, writes should simply fail over to the array automatically if the cache disk(s) fill up. That's not what I saw; I had to manually run mover to free up space before continuing. Eventually I do intend to add a parity disk and would like to have a cache disk configured at that time. (2) Initially I set up DelugeVPN despite having previously run usenet only. After a little deducing, simply by shutting that docker off, I've found a big issue related to this particular docker/application. Basically, while it's running and downloading anything with the VPN enabled, the entire docker stack will seemingly freeze every few minutes: all dockers become inaccessible, and even the docker page in the unRAID UI will spin endlessly trying to load the container list. What I did notice during this time is that dmesg listed an error regarding a potential SYN flood. As a test I disabled the Deluge docker and everything has been fine since; for now I'm leaving it off until I can sort things out. (3) As part of this whole process, having added more disk space and acquired more 4K content, I wanted to have Plex make use of the onboard HD4600 iGPU if at all possible. All the existing search results and documentation seem pretty straightforward except for one massive hiccup that I don't know if I'm simply overlooking: every mention of this process has you going into a mysterious 'Extra parameters' section of the Plex docker and adding "--device=/dev/dri:/dev/dri" to it.
The problem is that nowhere, at least in 6.6.2, can I find this 'Extra parameters' field. I can provide screenshots of any page if needed. I've even looked for fields that sound similar in case the wording has changed, but nada. (4) Lastly, and this is very application-specific and not really an unRAID issue, but I figured I'd add it to this list anyway: something that feels like a new occurrence in the interaction of Sabnzbd and/or Sonarr that I don't recall happening on my previous system. I have the usual tv and movies categories and configuration, with tv set to high priority and movies to low. What happened a few times last night: say a movie begins downloading first and is in the middle of unpacking. If a tv download starts during that time and completes downloading and unpacking, it gets stuck there while the movie is still unpacking; Sonarr refuses to pick up that episode until the movie is complete, despite the set priorities. Let me know if any additional info, screenshots, or logs are needed. I spent the majority of my evening last night getting this all set up to a relatively stable state, staying up much later than intended, so I didn't have the forethought to grab any error messages or screenshots.
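On point (3): if memory serves, the "Extra Parameters" field in the 6.6.x container edit page only appears after flipping the Basic/Advanced View toggle at the top right of the page, which would explain why searching the Basic view turns up nothing. Also note the flag needs the equals sign (`--device=...`); some guides render it without one. A small sketch of the flag and a sanity check:

```shell
#!/bin/sh
# The exact string to paste into the Extra Parameters field once the
# Advanced View toggle makes it visible.
FLAG='--device=/dev/dri:/dev/dri'

# Sanity check on the host: if the iGPU driver is active, /dev/dri should
# contain card0 and renderD128 nodes. (On the T20, remember the iGPU is
# disabled whenever a dGPU is installed.)
[ -e /dev/dri ] && ls -l /dev/dri

echo "$FLAG"
```

If `/dev/dri` is missing on the host, no amount of docker configuration will help; the iGPU has to be enabled and its driver loaded first.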