GazzaShergar

Members
  • Posts: 9
  • Personal Text
    CPU: Xeon E5-2630L v3. Mobo: Asus X99-E WS. RAM: 32GB Corsair Vengeance LPX 2400. GPU: GTX 1050 Ti. Network: 10GbE SFP+
    Cache: 512GB Samsung Pro NVMe. Array: 2x8TB (1x parity) + 9x3TB + 1x3TB hot spare. 38TB usable

GazzaShergar's Achievements

Noob (1/14)

Reputation: 1

  1. That's the thing that has had me scratching my head. The old docker was left largely at defaults because it worked and, truth be told, I'm an idiot and I don't mess with things I don't understand. The new docker was no different; everything was set out of the box. I have managed to get a new docker working. I tried what you suggested earlier and it appeared to work, so I'm now moving everything over to the new docker so I can sanitise the old docker and its directories. Judging by all the recent tickets raised, there is an issue with docker and network configs not getting through the upgrade unscathed.
  2. Further update: I added another Plex docker (plex-1) without amending the default configuration, and the new docker works, albeit it isn't my Plex server. I would really like not to have to build my Plex server from the ground up all over again.
  3. A follow-up to my previous post: the update appears to have run roughshod over my network settings, and my Plex docker now can't see the outside world. Using the settings recommended in the release notes, my Plex docker now has a new IP address. I have port forwarded the new address; my router can see it and says it's active, but I still cannot access the docker by any means. I've used a number of combinations of bridging/bonding, I've tried ipvlan and macvlan, and I've tried the docker on host, custom, and br0 network configs. If I try to assign it its original IP address, the docker initialisation fails and it disappears. Weirdly, my tdarr docker works as advertised. (A sketch of the kind of custom-network/static-IP setup I'm describing is below, after this list.)
  4. I have upgraded from the previous version and followed all the guidance in the release notes, including deleting and recreating my port forward (Fritzbox user), and I cannot regain access to my Plex docker. It appears to be running, but I'm unable to get in via the GUI, the IP address, or any of my client devices. Reverting to the previous version didn't work either. tower-diagnostics-20230903-2056.zip
  5. Fritzbox user here, what sort of issues are we talking about? Haven't upgraded yet.
  6. Again, forgive my ignorance on this; I am by no means a software developer, and I'm still reasonably chuffed that I managed to get it working at all. It's functionally possible at a hardware level, so it must 'simply' (see note below) be a software implementation that is lacking in Unraid. In my (very limited compared to most) experience, if my Windows machines can't do something, it's usually a pretty safe bet the Linux community has an excellent solution to it. I'm doing stuff on my Unraid machine that two years ago I didn't even know was possible. So in this case I'm a little perplexed that the great community developers we have on board haven't jumped on this as a priority, especially in the current climate of hardware availability. NOTE: I'm not naïve enough to think this is a simple implementation... I am an idiot, though, so if any of the community devs could share with me in simple terms the technical hurdles this kind of thing is up against, that would be smashing. I do truly believe this could be a hugely important feature for Unraid.
  7. As the title suggests, this would be pretty handy. I plan to reduce the power consumption of my server by reducing the number of drives whilst increasing capacity. As I replace drives, it would be great if I could hit a button, like the mover function, and evacuate all the data off a given drive and spread it across the array, whilst carrying out parity corrections as I reduce the number of drives. I have used apps to do this in the past; however, due to either a configuration issue or, more likely, my own stupidity, I ended up with data loss, which manifested itself as approximately 3,000 very random missing episodes in my Plex media library and took months to rectify. (A rough sketch of the sort of 'evacuate a disk' behaviour I'm describing is below, after this list.)
  8. With Nvidia unlocking GPUs for multiple streams/sessions and officially supporting GPU passthrough in the latest round of driver releases, would it be possible to include paravirtualization in the same fashion that Hyper-V allows, so that multiple VMs can share resources on a single card? I will not pretend to understand the intricacies of how this works, so forgive me if there is already a solution to this, but it is absolutely a game changer in my home, allowing me to run hugely powerful Windows 10 instances through Parsec on Pi thin clients all over the house without the huge expense of multiple GPUs. The flip side is that it's consuming a lot of resources from my Windows 10 workstation that I would rather hand over to a dedicated GPU in my Unraid server.
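
Sketch referenced in post 3: a minimal illustration, via the Docker Python SDK, of what a custom macvlan network with a fixed container IP looks like underneath a br0/custom network setting. The interface name, subnet, gateway, image name, and address are placeholder assumptions for illustration, not the actual settings from the post.

```python
# Minimal sketch (placeholder values throughout): create a macvlan network
# parented to the host NIC and attach a container to it with a static IP.
import docker

client = docker.from_env()

# Subnet/gateway must be declared on the network for static IPs to be allowed.
ipam = docker.types.IPAMConfig(
    pool_configs=[
        docker.types.IPAMPool(subnet="192.168.1.0/24", gateway="192.168.1.1")
    ]
)

# "eth0" is an assumed parent interface; on a bonded/bridged host it differs.
net = client.networks.create(
    "plexnet",
    driver="macvlan",
    options={"parent": "eth0"},
    ipam=ipam,
)

# Create the container detached from the default bridge, then connect it to
# the custom network with the fixed address it should keep, and start it.
container = client.containers.create(
    "lscr.io/linuxserver/plex",   # placeholder image name
    name="plex-test",
    network_mode="none",
)
net.connect(container, ipv4_address="192.168.1.50")
container.start()
```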
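Sketch referenced in post 7: a rough illustration of the "evacuate a disk" behaviour being requested, moving everything off one disk mount onto another while preserving the folder structure. The mount points, the dry-run default, and the script itself are assumptions for illustration only; this is not an existing Unraid feature.

```python
#!/usr/bin/env python3
# Rough sketch only: move every file off one array disk onto another while
# keeping the same relative paths. /mnt/disk3 and /mnt/disk5 are placeholders.
import shutil
from pathlib import Path

SOURCE = Path("/mnt/disk3")   # disk being emptied (assumed mount point)
TARGET = Path("/mnt/disk5")   # disk receiving the data (assumed mount point)
DRY_RUN = True                # set to False to actually move files


def evacuate(source: Path, target: Path, dry_run: bool = True) -> None:
    """Move each file under source to the same relative path under target."""
    for path in sorted(source.rglob("*")):
        if not path.is_file():
            continue
        dest = target / path.relative_to(source)
        print(f"{path} -> {dest}")
        if dry_run:
            continue
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(path), str(dest))


if __name__ == "__main__":
    evacuate(SOURCE, TARGET, DRY_RUN)
```

The per-file move is the easy part; what the feature request is actually asking for is the piece this sketch skips, namely doing the evacuation while the array stays parity-protected and then shrinking the array afterwards.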