danioj

Everything posted by danioj

  1. Acting uncharacteristically extreme when we get hurt is very human and understandable, and my personal opinion is that is what a few of the fellas at @linuxserver.io did after the exchanges on the previous thread. I have no reason to doubt the sincerity of this post by @limetech, and therefore was hoping there would be some return comms from @linuxserver.io to do their part in repairing this bridge. It would be nice if those who “retired”, decided they were now “out”, or were “quitting unRAID work” came back. It would also be nice if some of the support threads that were locked were now unlocked, some links reinstated where appropriate, and edited posts that now read “Depreciated” replaced with more helpful information, so we can start a joint and peaceful transition to the new official unRAID build, with appropriate guidance for community members who might have missed the recent exchanges. Personally, at the very least I’d like to see an acknowledgment of this post from the team if they are just not quite ready to move on yet - which is also understandable - wounds don’t often heal overnight. Paraphrasing @aptalca, no one likes a one-way street. If virtual hands could be shaken here - in an ideal world (for me) publicly - then the symbiotic unRAID-centric relationship between a big community contributor and the company can continue, and we can all move on together! 🙂
  2. As much as I felt I had to post on the beta release thread, I felt the need to post here again for this very nice and sincere post. “An apology is the super glue of life. It can repair just about anything.” - Lynn Johnston. A great point, and a note on which to pivot and move on. I appreciate you making this post, Tom @limetech.
  3. Following @CHBMB’s request to have the support thread locked (and that request being actioned), along with his comment that all development and support for it has now ceased following @limetech’s announcement, it wouldn’t surprise me if that app has been removed from CA altogether. CA is also a community app, and the developer AFAIK still has a close relationship with the @linuxserver.io team. It appears, therefore, that to use Nvidia drivers with any future release of unRAID you must use the stock build (which now has them in, of course). How to configure your Dockers to use those stock builds is another thing, and something I haven’t researched yet.
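For what it’s worth, a sketch of how containers typically consume host-installed Nvidia drivers via the NVIDIA container runtime (which is my understanding of how the stock build exposes them - I haven’t verified this on the new release; the container name, image and GPU UUID below are placeholders):

```shell
# List the GPUs the host driver can see, to get the UUID:
nvidia-smi -L

# Hypothetical run command: hand the GPU to a container via the
# NVIDIA runtime. On unRAID, --runtime=nvidia would go in the
# "Extra Parameters" field and the two -e values in the template.
docker run -d \
  --name=plex \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  plexinc/pms-docker
```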
  4. I’ve been around a little while. I always follow the boards, even though I have very little life time to give to being active in the community anymore. I felt the need to post to say I can completely appreciate how the guys at @linuxserver.io feel. I was lucky enough to be a part of the @linuxserver.io team for a short while, and I can personally attest to how much personal time and effort they put into development, stress testing and supporting their developments. While @limetech has developed a great base product, I think it’s right to acknowledge that much of the popularity and success of the product is down as much to community development and support (which is head and shoulders above by comparison) as it is to the work of the company. As a now-outsider looking in, my personal observation is that the use of unRAID exploded due to the availability of stable, regularly updated media apps like Plex (the officially supported one was just left to rot), and then exploded again with the emergence of the @linuxserver.io Nvidia build and the support that came with it. Given that the efforts of the community and groups like @linuxserver.io are even used in unRAID marketing, I feel this is a show of poor form. I feel frustrated at Tom’s “I didn’t know I needed permission ....” comment, as it isn’t about that. It’s about respect and communication. A quick “call” to the @linuxserver.io team to let them know of the plan (yes, I know the official team don’t like sharing plans at the risk of setting expectations they then won’t meet), to acknowledge (even privately) the work that has contributed (and continues to contribute) to the success of unRAID, and to let them be a part of it, would have cost nothing but would have been worth so much. I know the guys would have been supportive too.
I hope the two teams can work it out, and that @limetech don’t forget what (and who) helped them get to where they are - and perhaps look at other companies who have alienated their communities through poor decisions and communication. Don’t make this the start of a slippery slide.
  5. My understanding is that iDRAC ports are similar to SuperMicro’s (and others’) dedicated ports for server management (i.e. you cannot use them on your server as an available interface for your OS - in this case unRAID). You “could” (I think) go down the route (if you have a managed switch) of using your spare RJ45 port as the port for pfSense (available to the VM as a bridge) and using VLANs for LAN and WAN. I won’t elaborate on that. I think the best bet, though, is to use your Intel SFP+ card. I’m assuming here that you’re not going 10G and you just want to use your SFP+ ports as additional RJ45 ports for pfSense. A quick search found this link, which appears to relate to someone wanting similar SFP+ to RJ45 modules for your Intel card: https://forums.servethehome.com/index.php?threads/intel-82599es-transceiver-compatability-sfp-to-rj45-1g.25729/ If you can get ones that work, then I imagine you can just pass that card through to the VM (assuming your server supports it) and use them for LAN/WAN respectively. Then off you go. Sent from my iPhone using Tapatalk
  6. I’d love the ability to assign multiple networks to a single Docker container. The simplest use case involves Pi-hole. Currently I run separate instances of a Pi-hole container for each of my VLANs. It would be great if I could just attach one container to each network and configure Pi-hole to listen on all assigned interfaces. I’m pretty sure you can do it via the CLI using ‘docker network connect <network> <container>’ or something, but it would be excellent to be able to do it in the GUI.
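For anyone who wants to try it from the CLI today, this is roughly the shape of it (a sketch only - the network name “br0.20” and container name “pihole” are placeholders for my setup, and it assumes a Docker network already exists for each VLAN):

```shell
# Attach an already-running container to a second VLAN network.
# "br0.20" and "pihole" are placeholder names - substitute your own.
docker network connect br0.20 pihole

# Confirm every network now attached to the container:
docker inspect -f '{{range $name, $net := .NetworkSettings.Networks}}{{$name}} {{end}}' pihole
```

The catch is that anything done this way doesn’t show in the unRAID GUI and may not survive a container update, which is why I’d like it natively.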
  7. Watching the thread via email updates and had to chime in. I like this one, good job!!
  8. I currently have all my networks restricted to using Pi-hole, which is set to use Unbound (locally hosted in pfSense) to resolve DNS queries (where the request isn’t cached). I do not use any external DNS service. What benefit would introducing this software into my setup give me?
  9. As someone who uses VLANs in my setup, I’d love the ability to choose which interface(s) the management features of unRAID (e.g. GUI, SSH, etc.) are bound to - either one or all. I’d like to have unRAID deployed to my management VLAN along with my other network gear, and to my main VLAN where all my file access occurs. I can do this, of course, but I’d like to be able to prevent clients on the main VLAN from accessing the GUI and SSH (not possible via FW rules, of course, as traffic on that VLAN doesn’t go back to the router to trigger a block rule) and just have file sharing enabled. Right now I can deploy unRAID to X VLANs, and on each IP (required to be assigned for file sharing) the GUI / SSH is accessible.
  10. The best way to answer a question you have is sometimes to write it down. That worked in my case: I re-read what I wrote and thought - that sounds strange. RTFM!! So I went and read the Docker documentation. In 2 minutes I found what I needed: https://docs.docker.com/config/containers/container-networking/ This makes it (sort of) clear why the option doesn’t exist in unRAID. Either way, I was able to add --dns X.X.X.X to the docker run command via the advanced mode in the container setup page and, lo and behold, external DNS was set in the container.
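As a concrete sketch of the above (the container name and the 10.8.0.1 address are placeholders for illustration; on unRAID the flag goes in the “Extra Parameters” field of the container’s advanced view):

```shell
# Run a container with an explicit DNS server rather than inheriting
# the host's. "myapp" and 10.8.0.1 are placeholders.
docker run -d --name=myapp --dns 10.8.0.1 alpine sleep infinity

# The override lands in the container's resolv.conf:
docker exec myapp cat /etc/resolv.conf
```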
  11. Hello, The unRAID interface does not allow for specifying DNS settings per interface. One would assume this is by design (or by restriction), as it seems too obvious a thing to omit. I have set my secondary NIC up against a port on my switch which is assigned to a VLAN that pfSense routes through my VPN gateway. I have the interface set to grab an IP from my VLAN DHCP server (which it does), but I’d like to just set the VPN DNS server directly, rather than the way I have unRAID set up (unRAID > Pi-hole > pfSense (Unbound)). I have bridging enabled on that interface, which I then use to assign an IP to a Docker container. This works great, and traffic is routed as I expect through my VPN via the VLAN. That is, until I need name resolution. My question is: how do I achieve what I want (being able to set a different set of DNS servers for the secondary interface), assuming that, as I say above, this is by design or restriction? I am creating a new topic as there have been a number of similar topics created in the past that have had no answers. I cannot find a topic through the forum search that answers this question. Any info on this would be helpful. Thanks Daniel
  12. Hello All, I am optimising my network design to ensure that potentially threatening devices (IoT, cameras, guests etc.) are segregated from my main internal network. I will utilise VLANs for this. I intend to utilise a second NIC to give my unRAID server access to my main LAN and my camera VLAN. It’s easier this way, as I don’t have to set up inter-VLAN routing. pfSense makes it easy for me to restrict all but SMB traffic between the cameras and unRAID (thus protecting unRAID). The only thing I cannot figure out is how to restrict access to shares by network. What I’d like to do is only allow my camera “user” to log on while on my camera VLAN, and once it does so, ONLY be able to access 1 share. This would mean no other user would be able to log in to the server on the camera VLAN. The threat I am trying to defend against is a device on the VLAN becoming compromised, opening access to the server, and through luck and/or other means getting access to my other shares. Is there a way to restrict share access based on network in the OS that anyone knows of? Thanks Daniel
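One angle I’m considering (a sketch I haven’t verified on unRAID - the share name, user and subnet below are placeholders for my setup): Samba itself supports per-share hosts allow / hosts deny rules, which could in principle go in the Samba extra configuration box under Settings > SMB:

```
# Hypothetical per-share override: only the camera VLAN subnet may
# reach this share, and only the camera user may authenticate.
[cameras]
    path = /mnt/user/cameras
    valid users = camerauser
    hosts allow = 192.168.40.
    hosts deny = 0.0.0.0/0
```

This would limit which network can reach the share, but it wouldn’t stop the camera user logging on from another VLAN, so it only covers half of what I’m after.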
  13. Are you after backup or file sync? Each has its own use cases, and they are different (while they might help serve similar goals in some cases). If it’s the former, then I use Duplicati (Docker available). If it’s the latter, then I use Syncthing (Docker available). Both are stable and solid. Duplicati is true backup in that it has versioning, archiving etc. Syncthing is best used to keep locations in sync, and can be pressed into service for backup as it’s bi-directional. I’m not sure what feature in Nextcloud you could use as your backup solution, unless there is an app you can install within it that does something similar to the above.
  14. I started a thread in 2015 which culminated in my current setup: Main Server - M/B: Supermicro X10SL7-F Version 1.01; CPU: Intel® Xeon® CPU E3-1241 v3 @ 3.50GHz; Memory: 32 GiB DDR3 Single-bit ECC; Disk Capacity: 71 TB w/ Dual Parity. Backup Server - M/B: ASRock C2550D4I; CPU: Intel® Atom™ CPU C2550 @ 2.40GHz; Memory: 12 GiB DDR3 Single-bit ECC; Disk Capacity: 15 TB w/ Single Parity. I have no issue at all with my data requirements. These machines are NAS workhorses and serve my needs perfectly. Where I am starting to come up short is in the ‘extra’ things I’d like to do with my servers. I’m not a ‘one box rules them all’ fan: I have a family, and with the way unRAID is, taking the array down (or a problem with a single disk) can bring your house to a halt, so I stay away from running everything from one machine. I’d love to do more with my backup server, but it doesn’t support VT-d, so I only have it power on once a day to take the backup from the main server. I run Home Assistant and Pi-holes (mainly due to the ridiculous writes this thing does) on small little NUCs sat alongside my servers. All my servers connect (dual-NIC aggregate bonds) into one Gigabit switch (except the dedicated IPMI, which I feed into the router), which connects into an Asus AC88U. On my main server I currently run a LibreELEC VM which powers my living room TV, plus a number of Dockers including Plex supporting that. I have an Nvidia card in there (on a riser cable), which means I don’t transcode, and all my house clients are RPi4s; there’s also a 4-port SATA extender which has allowed me to increase my drive capacity to 16 disks (mostly 8TB, with a few original 3TB for data and parity, a 1TB SSD for cache and a 259GB SSD for the VM). My backup server is full of WD 3TB Greens with a mechanical drive as cache (running the backup Docker and a few non-relevant others). A good setup all in all (except some of the SATA ports on my backup server are damaged - not an issue, as the case only allows 8 disks).
As I have inferred, my needs are expanding. I don’t, however, want to move away from unRAID, so I am thinking about how to do that. I am preparing to move into a new home. I want to set up an NVR (BlueIris) VM which will run 8 PoE cams. I’m going to need PoE switches for the cams, and a better solution to my WiFi needs: my new home is massive, and while the little Asus worked in an apartment, it won’t work in a large single-storey house. So I’m going to need PoE switches, patch panels and access points. Those are a no-brainer, as UI has excellent products for them. I’m going to go with pfSense for my router and, as I mentioned, BlueIris for my NVR - I’m not going anywhere near the ‘Alpha’ UDM Pro and their NVR solution in ‘Protect’. Which means I will need to expand my current setup to acquire surveillance drive(s) and a VM powerful enough to run BlueIris. I will likely move my TV client to an RPi4, as the server and equipment will end up in the garage (and not near the TV as it is now). I’d also like to be able to virtualise things on my backup server (e.g. while I’m not a one-box-rules-all fan, my backup server would be a perfect solution for pfSense if I could pass through a 4-port NIC), but I can’t. I keep hoping LT will allow VMs without array start, but I won’t hold my breath. I’m open to my backup server running 24x7 like my main server (given my pfSense comment above), but I’ll need to upgrade it to allow virtualisation support. So .... what I am thinking is this: move my current mITX SuperMicro main server setup into my backup server case; move my mITX Atom backup server setup into an SFF case (with an external PSU) as the base for a pfSense router (with the 4- and 2-port Intel NICs I have in a drawer upstairs); and ultimately build a new 2020 main server. If I end up getting a rackmount case for my main server (see below), I’ll ditch the Silverstone DS380 that houses my backup server and use my main server’s Fractal Define R5 case as my backup server.
As you can see from my current main server, I didn’t cheap out on components. I went with the much-recommended server-grade products of the time for all the features (ECC, IPMI etc.), and I don’t want to lose them. Unlike back then, the Wiki doesn’t have a current recommended build, and I am not sure where to turn (given we now have Ryzen with more cores than a MF) or how to proceed. My initial thought was to just buy the natural upgrade to the X10SL7-F, but there isn’t one!!! Certainly not one with as many SATA ports, either. So I’m going to need an expander card. Before I know it I’m outgrowing my current case (in my head), needing more PCIe slots, and am thinking (given the rackmount network equipment I’m buying) I should build into a rack case - however, the selection there isn’t simple either. Silverstone Tech had a great hot-swap 16-disk 2U case, but they discontinued it. Others I’ve seen look crap. Long story short (too late, I know), I believe I built the 2015 SOHO build and now I am after the 2020 SOHO Plus build. I know we all (deep down) love talking hardware, and this is the prelude thread to an updated build thread I did in 2015, so I am after suggestions, comments and general discussion on how to move forward! Much like my journey in 2015, this should be a fun 6-12 months!!! Glad it’s my 40th birthday year and I have some approval to spend some $$!!
  15. Without some specific errors I’m not sure I can help. When I was running Core in Docker on unRAID, mine never dropped off or stopped working. I had morning, daytime, evening and night automations, and all worked every day. I did have an issue with some of my TP-Link switches (configured so HA saw them as lights) not “reconnecting” each day. This was because I had my Dockers set to be backed up at 3am each morning, and there is a bug in the TP-Link FW when you have them running locally instead of via the cloud. Other than that (which really had nothing to do with HA), my setup was solid.
  16. Yes, I did (until recently), and it worked flawlessly for months. When you say unstable, what do you mean? Now I run it in a VM on a dedicated NUC running Ubuntu. Taking the server down for maintenance, tweaking or WhateverTF was not very wife- or family-proof. One machine to do it all doesn’t really work for me these days.
  17. This was a very interesting and informative post. So much so that I think it is a candidate for a sticky. My big takeaway from this is that there really is no "minimum hardware requirement" for unRAID when you are utilising much of its virtualisation capability and have also added various plugins. Using your advice, I found the "sweet spot" for my configuration, which means the VM doesn’t stutter and the host has more than enough resources. Thanks for your help.
  18. https://wiki.unraid.net/The_parity_swap_procedure
  19. Woah, that worked. No stutter. It’s not like the VM was using that many cores, even though it had them allocated, and there were at least 2 allocated to unRAID. Surely unRAID doesn’t need all those 6 cores not allocated to the VM. Is this a known bug?
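On the core-allocation point, a related technique worth knowing about (a sketch only - the core numbers are placeholders, assuming an 8-core CPU where cores 2-7 are pinned to the VM): isolating the VM’s cores from the host scheduler entirely with an isolcpus kernel parameter, added to the append line in the syslinux configuration (Main > Flash):

```
# Hypothetical syslinux append line: keep unRAID's own processes off
# cores 2-7, so only the VM (pinned to those cores) runs on them.
append isolcpus=2-7 initrd=/bzroot
```

A reboot is required for the change to take effect, and the isolated cores then sit idle unless something is explicitly pinned to them.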
  20. No worries. We all come a cropper over the decisions we make at setup, only to find those decisions sometimes impact smooth operation. AFAIK there is no way of backing up without stopping. If there were, I am sure the CA dev would have implemented it that way. You could just let all users know that there will be downtime each day. I’d say your only way forward, if you don’t want to do that, is to choose a time of day when you have the least usage (maybe 5am) to limit the impact. Or you could reduce the frequency of your backups.
  21. Do you have CA Backup / Auto Update installed and set for 2am!? In order to back up / update Dockers, CA has to stop them.
  22. Hi, I have a weird issue that I cannot resolve. I have my living room TV displaying a VM on my unRAID server which is running LibreELEC. It is playing media that is on the same server. Media is spread across many different disks. My server hardware is: M/B: Supermicro X10SL7-F Version 1.01 - s/n: NM149S013462. BIOS: American Megatrends Inc. Version 3.2, dated 06/09/2018. CPU: Intel® Xeon® CPU E3-1241 v3 @ 3.50GHz. HVM: Enabled. IOMMU: Enabled. Cache: 256 KiB, 1024 KiB, 8192 KiB. Memory: 32 GiB DDR3 Single-bit ECC (max. installable capacity 32 GiB). Network: eth0: 1000 Mbps, full duplex, mtu 1500. No other VMs are running, and I don’t have a great deal of resource-intensive Dockers running either. The VM is stable as. It plays 4K video perfectly, even while a couple of other clients are streaming, transcoding etc. The VM is allocated 6 of 8 cores and 8GB RAM. That is, until I access the unRAID GUI. When I do that, the video playback / VM stutters for 2 seconds. I’ve observed a brief spike in CPU usage in one or two of the cores at the same time, but nothing on all of them, and not to 100%. I thought it might be due to the cache disk also running other Dockers and IO usage, so I switched it to an SSD outside of the array on UAD. No difference. Weirdly, it only happens on the Dashboard and Main tabs - none of the others. There is absolutely nothing in the logs, and I have trawled the diagnostics. I’ve tried different videos from each disk, of varying size and type. I’ve disabled all Dockers and plugins and just run the VM, and still, when I access the unRAID GUI, the video stutters. Similarly, I’ve stressed the server to the point of overload and, with my settings, the VM plays perfectly until I access the GUI, then stutters. I have no idea how to progress this.
  23. However, I can reconstruct this disk from that recent parity sync successfully, right? I haven’t been able to do a parity check since I’ve had the read errors!?