Ockingshay last won the day on August 22 2018 and had the most liked content!

About Ockingshay

  • Community Reputation: 8 (Neutral)
  • Rank: Advanced Member


  1. Trying to upgrade to this version and the download stops at 8%. Is your link OK? It's teasing me! 18 minutes later:
  2. The developer has started to update this again, from various pull requests. Will these get picked up in the weekly linuxserver update? EDIT: as if by magic, the update was already there for it...thanks! It seems to be working a lot better now as well.
  3. Drive management: build pre-clearing into the GUI, so you can just plug in a drive and the GUI prepares it while the array is up and adds it with the least disruption; plus the ability to add and remove drives to grow or shrink the array in a very user-friendly way. I have 15 x 3TB drives and, with all technology moving on, I can now buy 14TB drives. It would be great to be able to remove four old drives and replace them with a single drive, all intuitively managed by the GUI, with unassigned-drive support baked in. Basically, very easily managed drive support to give users the confidence to upgrade/change things around when required, which at the end of the day is exactly what Unraid's core functionality is all about: drive management.
  4. Many thanks, that fixed that error. Is there any way to run it on a custom IP address? I cannot connect to my other custom-IP-addressed dockers this way. I guess I may have to run this on a separate machine to allow access. It is the same issue with WireGuard.
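For context on why this happens: containers on a macvlan ("custom IP") docker network are isolated from the host itself by design, so anything running on the host network (including WireGuard on the host) cannot reach them directly. A commonly used workaround is to give the host its own macvlan interface. This is only a sketch; the interface names and every address below (eth0 as the parent NIC, 192.168.1.250 as a spare host address, 192.168.1.200 as a container) are assumptions to adjust for your LAN:

```shell
# Sketch only -- requires root, and assumes the docker macvlan network
# was created with eth0 as its parent interface.
ip link add macvlan-shim link eth0 type macvlan mode bridge

# Give the host a spare address on the shim (assumed: 192.168.1.250).
ip addr add 192.168.1.250/32 dev macvlan-shim
ip link set macvlan-shim up

# Route traffic for a specific container IP via the shim instead of eth0
# (assumed container address: 192.168.1.200).
ip route add 192.168.1.200/32 dev macvlan-shim
```

This does not survive a reboot on its own; it would need to be re-run from a startup script.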
  5. Just did a new install and have ticked privileged and set a custom IP address. Everything is default. When I try to "start the server" from within the webgui I get the following error:
  6. Well, thank you very much for your response! I did indeed have it ticked, so I did the classic "un-tick and re-tick" and now it is showing correctly, like yours. Now I'm wondering if it was simply a graphical glitch or whether it actually hasn't been aggregated correctly all this time. I will have to carry out some tests to see if there's an improvement. Many thanks. Edit: interestingly, I have had to re-edit all my dockers that used the custom network type, as they failed to start and had reverted to network type "none".
  7. For many years now I have used Dynamic Link Aggregation (4) in the network settings tab and only ever configured it under eth0. I have always left eth1 blank (not configured) and I can see that it creates bond0. I use this with a Cisco switch and configured its corresponding settings. It's only because, after updating to 6.8.1, I thought I would have a look around all the webgui changes that have happened over the past iterations and see if there's anything exciting that applies to me, and it made me ask myself: should I copy the eth0 settings to eth1, or continue to leave it blank and let the webgui report it "not configured"? Does Unraid automatically sort itself out? Would it not be better for the end user if the webgui displayed this better? Or have I been using this setting wrong all these years?
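One way to answer the "does it sort itself out" question is to ask the kernel directly what it did with the bond. A quick diagnostic sketch (bond0 is the name the webgui reports above; run from the console):

```shell
# Inspect the bond the kernel built: bonding mode, enslaved NICs, link state.
cat /proc/net/bonding/bond0

# Narrow it down to the interesting lines. In 802.3ad (Dynamic Link
# Aggregation) mode, both eth0 and eth1 should appear as Slave Interfaces
# with MII Status "up" and matching Aggregator IDs -- otherwise only one
# link is actually carrying traffic.
grep -E 'Bonding Mode|Slave Interface|MII Status|Aggregator ID' /proc/net/bonding/bond0
```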
  8. Can someone please screenshot their docker page? I can get this to run in bridge mode (with or without privileged toggled) but it will not connect to devices; it just states "waiting for device" in the webgui and eventually times out, although it's discoverable. I had this same issue with the UniFi controller and set it to host, but if I do that with this docker I get the error: 2019/08/30 14:18:40 [emerg] 1954#1954: bind() to failed (98: Address already in use) nginx: [emerg] bind() to failed (98: Address already in use)
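That nginx "(98: Address already in use)" message means another process on the host has already bound the port the container wants in host mode, which is why bridge works and host doesn't. A generic way to find the culprit (a sketch; port 8080 is a placeholder, since the actual port is truncated out of the error above):

```shell
# List listening TCP sockets with the owning process for each one
# (run as root so process names for all users are shown).
ss -tlnp

# Filter for the contested port (placeholder: 8080) to see what holds it.
ss -tlnp | grep ':8080'
```

Once you know which service owns the port, you can either stop it or move one of the two to a different port.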
  9. And at the cost of performance (from what I'm reading), as the HDD has to process that. What that means in real terms I don't know, but if 4Kn is the future and 512e a transitional stopgap, why not run with it?
  10. Sorry, I keep editing posts and you are being incredibly kind and helpful! With reference to Dell's white paper, I plan on using the 3TB WD Reds that I already have in the system. As this is not true RAID, will 4Kn be OK? Seems like I'm trying to mix things up too much.
  11. Thanks. Having Unraid support it, will this impact anything else on the system? Dockers, VMs etc.? For simplicity, shall I just stick with 512e? Also, reading Dell's white paper on it, they say; I plan on using the 3TB WD Reds that I already have in the system. As this is not true RAID, will 4Kn be OK? Seems like I'm trying to mix things up too much.
  12. Thanks for the reply. Having Unraid support it, will this impact anything else on the system? Dockers, VMs etc.?
  13. I am planning on using these drives in my new build and they come in various flavours. They are all similar prices, so it's really about what's supported best and whether there are any advantages. Firstly, there is SATA or SAS; for simplicity I would choose SATA, as that's what the motherboard supports. Then there is ISE (instant secure erase) or SE (secure erase); ISE looks useful for end-of-life, but are they compatible? Finally there is 512e or 4Kn; 4Kn is the most modern, but again, is this supported in Unraid? Many thanks.
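On the 512e vs 4Kn part of the question, Linux exposes what a drive actually reports, so you can verify a drive's flavour once it is plugged in. A sketch (/dev/sda, i.e. the `sda` sysfs entry, is a placeholder for whichever device the new drive lands on):

```shell
# Logical sector size as presented to the OS:
#   512 on 512n and 512e drives, 4096 on 4Kn drives.
cat /sys/block/sda/queue/logical_block_size

# Physical (native) sector size on the platters:
#   4096 on both 512e and 4Kn drives.
cat /sys/block/sda/queue/physical_block_size

# Or show both columns at once for every block device:
lsblk -o NAME,LOG-SEC,PHY-SEC
```

A 512e drive will show 512/4096, a 4Kn drive 4096/4096 — a quick way to confirm what you actually received before committing it to the array.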
  14. 5) Case: When I built my first server, the only criterion I had was the ability to house loads of drives. The beast that I ended up with was a Xigmatek Elysium. It is so big that the motherboard looks like a little toy in the middle of it! However, this case has lasted me so well: the fans are all still working, and being on wheels I can clean it out every year, so it's been a fantastic buy. Due to its large size, temps have been fantastic. The CPU sits at 32°C and the MB at 33°C all year round. Currently all my drives sit at 27°C and this never fluctuates. I think this has been the biggest impact on longevity. My "server" room is just a glorified cupboard in the middle of my house, but having lived here for so long, whenever we've renovated rooms we've added Cat6, so everything is hardwired back to this cupboard now. That also includes my landlines and PoE cameras. Being an electrician, it also has its own dedicated circuit running off an isolating transformer, plus air conditioning. What I didn't appreciate the value of in those early days was hot-swappable cases: the ones that can be rack mounted so you can just replace drives as and when required. It's a bit of a session to do anything in the Xigmatek, so I really want to move to something like that. Fortunately, in my line of work I have access to plenty of stuff like this, and over the past couple of years I have slowly moved to rack mounting everything. In the end it will be whatever the server monkeys are upgrading and won't need any more.
  15. 1&2) CPU & Motherboard: This is what I've struggled with the most. It was easy 7 years ago, but there are just so many choices out there now. What I don't like about my current setup is that my X9SCM doesn't support the iGPU of my E3-1270 V2, so I cannot use hardware acceleration for Plex transcoding. This server runs my entire house and I have about 15 dockers doing all manner of things. I need the cores to be able to handle all this, and thanks to the AMD Threadripper, both camps have started to make many-core CPUs part of the norm now. One thing to consider is multiple transcodes, so is AMD out of the question? Could I use a Threadripper and an NVIDIA GPU? I don't think dockers support GPU passthrough. If that is the case then Intel is the obvious choice, but what family: workstation Xeons or these new prosumer i7/i9s? What I like about the Xeons is that everything is geared towards server functionality. If I go down the i7/i9 route, I have to buy a "gaming" motherboard that costs an arm and a leg because it has features useless to me, like RGB lighting, WiFi AC etc. What pulls me towards the i7/i9 is the 4K support; they just seem to be more geared towards this new generation of movies/TV. So I'm really not sure what to do about this part. The other consideration is that, depending on the motherboard, I will need to get ECC RAM or not. I would prefer to use ECC RAM, as that has worked perfectly for me for the last 7 years, but again, I'm open to suggestions.