
About Interstellar


  1. Just before I open a bug topic, can people using Safari check whether opening a sub-window (e.g. the log) asks you to log in again? Safari: I get the login window on every pop-up (e.g. the 'Log Summary' button), which, when you log in, just takes you back to the home 'Dashboard' page again... Chrome: works as intended, I assume (everything behaves as per pre-6.8). I use Safari 99.999% of the time, so this is a minor annoyance. (This happens on both my iMac and a newly installed MBP, so it isn't a cache issue.)
  2. Updated this morning. The system totally locked up after about 10 minutes. Nothing in the console or the flash-drive syslog. Forced a reboot; everything has been OK 8 hours later, so I'm hoping it was a one-off!
  3. Update: managed to isolate Docker/VMs in the end, thanks to the following threads. Essentially your second/third/fourth NIC needs to be set as follows in UnRAID's network settings:

     UnRAID Network Settings:
       • Enable bonding: No
       • Enable bridging: No
       • IPv4 address assignment: None

     UnRAID Docker Settings (example; use your own IP ranges):
       • Subnet:
       • Gateway:
       • DHCP pool: (32 hosts)

     Then in each of your Docker containers you need to pick eth1 (or br1 if you set 'Enable bridging' to Yes above). Then add a fixed IP if you want; otherwise it'll be assigned an IP from the range set above. Now all my Dockers can communicate with each other, the outside world, and UnRAID (and vice versa)!
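For anyone curious what the GUI is doing: under the hood UnRAID's custom Docker network on a dedicated NIC is, to my understanding, roughly equivalent to creating a macvlan network by hand. A minimal sketch, assuming the interface is eth1 and using placeholder example ranges (none of the addresses below are from my actual setup):

```shell
# Sketch only: UnRAID builds this network for you from the GUI settings above.
# 192.168.2.0/24 etc. are placeholder example ranges; substitute your own.
docker network create -d macvlan \
  --subnet=192.168.2.0/24 \
  --gateway=192.168.2.1 \
  --ip-range=192.168.2.128/27 \
  -o parent=eth1 \
  eth1   # network name matches what the UnRAID container template shows

# Attach a container with a fixed IP on that network:
docker run -d --network eth1 --ip 192.168.2.130 alpine sleep 300
```

The --ip-range flag mirrors the "DHCP pool (32 hosts)" setting: containers without a fixed IP get an address from that /27.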
  4. Still struggling with this, if anyone has any ideas? Cheers.
  5. Any ideas why I'm now getting this error in the preview window? (The only change I've made is upgrading from 6.7.1 to 6.7.2.) tput: unknown terminal "tmux-256color" Thoughts? Cheers.
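For what it's worth, that error means the host's terminfo database has no entry for tmux-256color. A hedged workaround (assuming screen-256color is available, which it almost always is) is to fall back to an entry the system does know about:

```shell
# If the terminfo database lacks tmux-256color, fall back to screen-256color,
# which has near-identical capabilities and is far more widely installed.
if infocmp tmux-256color >/dev/null 2>&1; then
  export TERM=tmux-256color
else
  export TERM=screen-256color
fi
echo "$TERM"
```

Whether the upgrade removed the terminfo entry or tmux started advertising it is a separate question, but this at least stops tput complaining.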
  6. Updated from 6.6.5 (just shy of 90 days' uptime...). All seems to have gone OK. Updated all my plugins and dockers beforehand, and again afterwards for those plugins that required 6.7.0. I'll be staying at this version for another 90 days 😂
  7. As a side note, I hate monochrome icons. When the sidebar icons changed from colour to monochrome in OS X, it started taking longer for me to work out which one I want. Style over functionality!!!
  8. I'm not up to date with the changes that cause your issues, but if it's a change for newer hardware then you might be SOL. They can't keep putting effort into supporting 10+ year old platforms.
  9. Indeed. In a time when most upgrades are becoming paid every year, this is welcome. Almost getting to the point where I'll buy another Pro licence, just in case!
  10. Well, certainly a welcome improvement! Just need to sort out the way we log in and it'd be a perfect package for me. It's been running absolutely perfectly for over a year now. Bit of a pain losing internet whilst the machine restarts, but other than that, 99.9% uptime. Other than the BIOS update I did preventing my NIC from working in the server any more, it's been flawless. An up-to-date quad NIC has been bought, so that one should be resolved tomorrow..!
  11. Updated my X10SLL-F from a 2014 BIOS to a 2018 one. It made my Intel Pro 1000PT quad NIC stop working! (The new BIOS seems to have fixed a bug where previously the bottom slot would negotiate PCIe v1a, but it now negotiates v2, and the 1000PT doesn't work with v2!) New NIC inbound! Other than that, the flash went fine and everything is working...
  12. Absolutely fine here. It even seems to start faster than before (not timed it; it just feels like the time from hitting restart to pfSense coming back up is way shorter than it used to be!)
  13. Upgraded from rc1 here. All good and a speedy boot from what I can tell. No log errors and everything working.
  14. All good and speedy here! Passthrough VMs started much quicker than usual from cold. SuperMicro X10SLL-F, 1230v3, RX560, Intel quad NIC
  15. Nope. I think I just pulled the drive and let parity rebuild, as it was faster. Although I have a vague recollection that I re-formatted the drives so I could mount them, then filled them with a massive /dev/zero file (at full speed!), then used the /dev/md* command to clear the first 500M, then pulled the drive and forced the parity to remain valid. Ended up with a handful of parity errors after the 11-hour check. Not ideal, but at least I had 99.999% valid parity whilst it checked. The system works perfectly otherwise and I haven't tried it again on newer versions.
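The clearing step above can be sketched roughly as follows. TARGET is a placeholder of mine, not from my actual procedure: on a live array it would be the /dev/mdX device for the disk being removed (writing through /dev/mdX is what keeps parity in sync), but here it defaults to a scratch file so the sketch is safe to try:

```shell
# Rough sketch of the "clear the first 500M" step, under the assumptions above.
# On a live array TARGET would be e.g. /dev/md3, and this is DESTRUCTIVE there!
TARGET=${TARGET:-/tmp/clear-demo.bin}

# Zero the first 500 MiB of the target.
dd if=/dev/zero of="$TARGET" bs=1M count=500 conv=fsync 2>/dev/null

wc -c < "$TARGET"   # prints 524288000 (500 MiB)
```

The /dev/zero fill beforehand zeroes the data area at full filesystem speed; the dd through the md device then clears the metadata region so the disk reads as blank while parity stays (mostly) valid.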