Ockingshay

Everything posted by Ockingshay

  1. Many thanks, that fixed that error. Is there any way to run it with a custom IP address? I cannot connect to my other custom-IP dockers this way, so I guess I may have to run this on a separate machine to allow access. It is the same issue with WireGuard.
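     A sketch of what I mean by a custom IP, for anyone following along (the network name, subnet and container names here are just examples, not from my actual setup):

        # create a macvlan network so a container gets its own IP on the LAN
        docker network create -d macvlan \
          --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
          -o parent=br0 mylan
        # pin a container to a specific address on that network
        docker run -d --name=mycontainer --network=mylan --ip=192.168.1.50 myimage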
  2. Just did a new install and have ticked privileged and set a custom IP address. Everything is default. When I try to "start the server" from within the webgui, I get the following error:
  3. Well, thank you very much for your response! I did indeed have it ticked, so I did the classic "un-tick and re-tick" and now it is showing correctly, like yours. Now I'm wondering if it was simply a graphical glitch or whether it actually hasn't been aggregating correctly all this time. I will have to carry out some tests to see if there's an improvement. Many thanks. Edit: interestingly, I have had to re-edit all my dockers that used the custom network type, as they failed to start and had reverted to network type "none".
  4. For many years now I have used Dynamic Link Aggregation (4) in the network settings tab and only ever configured it under eth0. I have always left eth1 blank (not configured), and I can see that it creates bond0. I use this with a Cisco switch, with its corresponding settings configured. It's only after updating to 6.8.1, when I had a look around all the webgui changes from the past iterations to see if there was anything exciting that applied to me, that I asked myself: should I copy the eth0 settings to eth1, or continue to leave it blank and let the webgui report it "not configured"? Does Unraid automatically sort itself out? Would it not be better for the end user if the webgui displayed this more clearly? Or have I been using this setting wrong all these years?
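     For anyone else checking their own aggregation, the bonding driver reports what the bond is actually doing (standard Linux, nothing Unraid-specific):

        # show bond mode, LACP details and which slaves joined
        cat /proc/net/bonding/bond0
        # with 802.3ad working, both eth0 and eth1 appear as slaves with "MII Status: up"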
  5. Can someone please screenshot their docker page? I can get this to run in bridge mode (with or without privileged toggled), but it will not connect to devices; the webgui just states "waiting for device" and eventually times out, although it's discoverable. I had this same issue with the UniFi controller and set it to host, but if I do that with this docker I get: 2019/08/30 14:18:40 [emerg] 1954#1954: bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
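     In case it helps anyone hitting the same bind error, this shows what is already holding port 80 (ss ships with the OS; netstat works too):

        # list listening TCP sockets and their owning processes
        ss -tlnp | grep ':80'
        # on Unraid the webgui's own nginx normally sits on 80, which is why host mode collides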
  6. And at the cost of performance (from what I'm reading), as the HDD has to process that emulation. What that means in real terms I don't know, but if 4Kn is the future and 512e a transitional stopgap, why not run with it?
  7. Sorry, I keep editing posts and you are being incredibly kind and helpful! With reference to Dell's white paper, I plan on using the 3TB WD Reds that I already have in the system. As this is not true RAID, will 4Kn be OK? It seems like I'm trying to mix things up too much.
  8. Thanks. Having Unraid support it, will this impact anything else on the system? Dockers, VMs etc.? For simplicity, shall I just stick with 512e? Also, reading Dell's white paper on it, they say; I plan on using the 3TB WD Reds that I already have in the system. As this is not true RAID, will 4Kn be OK? It seems like I'm trying to mix things up too much.
  9. Thanks for the reply. Having Unraid support it, will this impact anything else on the system? Dockers, VMs etc.?
  10. I am planning on using these drives in my new build and they come in various flavours. They are all similar prices, so it's really a question of what's best supported and whether there are any advantages. Firstly, there is SATA or SAS; for simplicity I would choose SATA, as that's what the motherboard supports. Then there is ISE (instant secure erase) or SE (secure erase); ISE looks useful for end-of-life, but are they compatible? Finally there is 512e or 4Kn; 4Kn is the most modern, but again, is this supported in Unraid? Many thanks.
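     For reference, once a drive is in the machine you can see which flavour it actually is (standard util-linux tools; /dev/sdX is a placeholder):

        # logical sector size: 512 on 512e drives, 4096 on 4Kn
        blockdev --getss /dev/sdX
        # physical sector size: 4096 on both 512e and 4Kn
        blockdev --getpbsz /dev/sdX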
  11. 5) Case. When I built my first server, the only criterion I had was the ability to house loads of drives. The beast that I ended up with was a Xigmatek Elysium. It is so big that the motherboard looks like a little toy in the middle of it! However, this case has lasted me so well: the fans are all still working, and being on wheels I can clean it out every year, so it's been a fantastic buy. Due to its large size, temps have been fantastic. The CPU sits at 32°C and the MB at 33°C all year round. Currently all my drives sit at 27°C and this never fluctuates. I think this has had the biggest impact on longevity. My "server" room is just a glorified cupboard in the middle of my house, but having lived here for so long, whenever we've renovated rooms we've added Cat6, so everything is hardwired back to this cupboard now. That also includes my landlines and PoE cameras. Being an electrician, it also has its own dedicated circuit running off an isolating transformer, plus air conditioning. What I didn't appreciate the value of in those early days was hot-swappable cases: the ones that can be rack mounted, where you can just replace drives as and when required. It's a bit of a session to do anything in the Xigmatek, so I really want to move to something like that. Fortunately, in my line of work I have access to plenty of kit like this, and over the past couple of years I have slowly moved to rack mounting everything. In the end it will be whatever the server monkeys are upgrading and won't need anymore.
  12. 1&2) CPU & Motherboard. This is what I've struggled with the most. It was easy 7 years ago, but there are just so many choices out there now. What I don't like about my current setup is that my X9SCM doesn't support the iGPU of my E3-1270 V2, so I cannot use hardware acceleration for Plex transcoding. This server runs my entire house and I have about 15 dockers doing all manner of things. I need the cores to handle all this, and thanks to the AMD Threadripper, both camps have started to make many-core CPUs part of the norm. One thing to consider is multiple transcodes, so is AMD out of the question? Could I use a Threadripper and an NVIDIA GPU? I don't think dockers support GPU passthrough. If that is the case then Intel is the obvious choice, but which family? Workstation Xeons or these new prosumer i7/i9s? What I like about the Xeons is that everything is geared towards server functionality. If I go down the i7/i9 route, I have to buy a "gaming" motherboard that costs an arm and a leg because it has features that are useless to me, like RGB lighting, WiFi AC etc. What pulls me towards the i7/i9 is the 4K support; they just seem more geared towards this new generation of movies/TV. So I'm really not sure what to do about this part. The other consideration is that, depending on the motherboard, I will need ECC RAM or not. I would prefer ECC RAM, as that has worked perfectly for me for the last 7 years, but again, I'm open to suggestions.
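     On the passthrough point, dockers can at least use an Intel iGPU for Quick Sync by exposing /dev/dri to the container; a minimal sketch, assuming a linuxserver-style Plex image (the image name is an example, not what I run):

        # hand the iGPU render node to the container for hardware transcoding
        docker run -d --name=plex \
          --device=/dev/dri:/dev/dri \
          linuxserver/plex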
  13. 6) PSU. I've always gone with Seasonic, so I will probably just get another one with the best efficiency certification on offer.
  14. 4) Array. This has taken a bit longer to get my head around, as there are just so many different cards out there. I plan on using 10TB+ drives to negate the need for so many drives in the system, reducing energy consumption and the number of components that can go wrong. The AOC-SAS2LP-MV8s have served me well and are still available and cheap. I do get a warning from "Fix Common Problems" telling me: I've never had an issue, but I'm happy to give other controllers a go. I have noticed that there is now a bottleneck in my system if I'm trying to watch a movie while copying data across drives. This didn't happen when I had fewer drives, so I guess it's a bandwidth issue? CPU is also a little high, so maybe it's partly that too. Supermicro do an AOC-S3008L-L8E, which has 12Gb/s ports. Is that overkill for WD Red/Gold drives? I want to avoid this bottleneck when the system is doing a lot of things at once. I want my server to be doing whatever it needs to be doing without interfering with me watching Plex; it has to be seamless. It uses PCIe Gen3 x8. Any other ideas for controllers? EDIT: Well, as seems to be the general consensus around here, I will go with an LSI controller.
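     Rough numbers behind the bandwidth worry (theoretical maxima; real-world throughput is lower, and the sustained-read figure is my assumption for 3TB Reds):

        PCIe 2.0 x8 link on the AOC-SAS2LP-MV8: 8 x 500 MB/s ≈ 4 GB/s
        8 spinners per card at ~180 MB/s sustained ≈ 1.4 GB/s

     So the card's link shouldn't be the choke point with HDDs; a 12Gb/s SAS3 card is overkill for spinning disks, and the real draw of the LSI route seems to be driver quality rather than raw bandwidth.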
  15. 3) Cache. This seems like a good place to start and was actually inspired by this Spartacus build. I'd not seen one of these before, but I really like the clean solution of using the ASRock Quad M.2 PCIe NVMe Gen3 expansion card. It allows me to install up to 4 NVMe SSDs in a tidy, actively cooled little package. It uses PCIe Gen3 x16, so I will need to make sure that the motherboard is able to support this. The other option is just to use 2.5" SSDs attached to the motherboard's SATA ports.
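     One catch I believe applies to that ASRock card (an assumption on my part, as it has no PCIe switch chip as far as I can tell): the x16 slot must support x4/x4/x4/x4 bifurcation in the BIOS, otherwise only one SSD shows up. Easy to verify once installed:

        # each NVMe SSD behind the card should enumerate as its own controller
        lspci | grep -i 'non-volatile'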
  16. Well, instead of getting you lot to do all the work, I thought I would use this platform to think out loud and go through the choices for each component, and hopefully get some feedback. I'll flick back between posts to address each, so it won't be in any particular order, but most probably the order in which I find it easiest to choose. Components required: 1) CPU 2) Motherboard 3) Memory 3) Cache 4) Array 5) Case 6) PSU
  17. This is my current system that I've had now for 7 years; it's running out of storage and starting to feel a bit long in the tooth for what Unraid can do now and the UHD era:
      Supermicro X9SCM
      Intel® Xeon® CPU E3-1270 V2 @ 3.50GHz
      32 GiB DDR3 single-bit ECC
      2 x AOC-SAS2LP-MV8 controllers
      14 x 3TB WD Red array
      2 x 3TB WD Red parity
      2 x 1TB WD Black cache
      It doesn't run any VMs, but it does run many dockers, from Plex to Roon to my cameras and network, so it has to be pretty powerful. What I'm thinking, and need support with, is the following: 1) Replace the 3TB drives with modern large-capacity 10TB+ drives to cut down on the number of drives while increasing overall array size. 2) Replace the cache drives with SSDs. 3) It requires dual GbE ports so that I can bond them. 4) Are those controllers still relevant? I've noticed some Supermicro boards have 10 SATA ports built in. 5) A similar sort of RAM. 6) iGPU offloading for hardware acceleration in Plex. 7) Plenty of cores to handle all the stuff it now does. 8) Preferably IPMI, as it sits in my server room, headless. It'll need to be the Intel route, I believe, to get hardware acceleration working, but other than that I'm open to any suggestions. Budget isn't too much of a problem if the solution fits, but I would like it to be bulletproof like my current system. If you were to create the ultimate media server build, what would you use?
  18. I've gone back to rc3, but I will try again tomorrow. I shouldn't be running out of memory (hopefully), as I've got 32GB installed. Stats at the time showed: 18.05GB free, 5.76GB cached, 7.24GB used. Thanks for the reply.
  19. Finally got home from work to investigate, and it's related to a couple of Dynamix plugins:
      cron for user root /usr/local/emhttp/plugins/dynamix.system.stats/scripts/sa1 1 1 &> /dev/null
      cron for user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
      cron for user root /usr/local/emhttp/plugins/dynamix.local.master/scripts/localmaster &> /dev/null
      all with the same error in the body of the email: /bin/sh: fork: retry: Resource temporarily unavailable
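     For what it's worth, "fork: retry: Resource temporarily unavailable" tends to point at a process/thread limit rather than RAM, so these are worth checking (standard Linux, nothing Unraid-specific):

        # total threads in use vs the system-wide ceiling
        ps -eLf | wc -l
        cat /proc/sys/kernel/threads-max
        # per-user process limit for root's shell
        ulimit -u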
  20. I was having adoption issues when I changed to this docker, and then I remembered I'd had the same problem before after an upgrade, where I had to change the docker from "bridge" to "host". Restarted the docker and it has picked up all my devices. This might help someone whose devices are stuck in an "adopting" loop.
  21. I’ve had 178 emails so far and counting. Is there something I can delete in plugins on my flash drive?
  22. I've had a whole load of these errors sent to me via email overnight, continuing through the day. It's happened since updating to rc4:
      cron for user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
      /bin/sh: fork: retry: Resource temporarily unavailable
  23. Or you could set up port redirection in your router. I have mine redirecting public 443 to (local) dockerip:444.
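     If your router is Linux-based, the equivalent rule looks something like this (a sketch only; eth0 and the address are placeholders for your WAN interface and docker IP):

        # send inbound WAN traffic on 443 to the container's port 444
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 \
          -j DNAT --to-destination 192.168.1.50:444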