

Report Comments posted by hawihoney

  1. 16 hours ago, itimpi said:

    If you click on the text for the setting you mention to activate the built-in GUI help it does make it clear that it applies to array devices and that pools always default to btrfs,

     

    Why would an average user click the help at all if the label "Default file system format" already looks clear enough on its own?

     

    But that is exactly the problem: the label is wrong, or at least misleading. IMHO it should be changed to "Default file system format (arrays)" or something like that.

     

    And if you are considering a change anyway, why not add a second option "Default file system format (pools)" and preset it to btrfs?

     

    That would make it clear for everyone.

     

  2. 33 minutes ago, itimpi said:

    I think pools default to btrfs but you can change it to whatever file system you want and then do the format.

     

    I double-checked but could not find a way to set the file system of the pool, so I trusted the default setting (XFS).

     

    Something must have changed over the years, because my old pool is XFS (see image below).

     

    I suggest changing that wording - the label - to something more accurate like "Default file system for array devices"!

     

    [Screenshot: the existing pool formatted as XFS, with drive temperatures visible]

     

    Time for a third reboot to change that. BTRFS is not an option here ... look at that temperature. The XFS pool is in constant use; the BTRFS pool is only used as a backup target once a day and is otherwise idle. It is currently sleeping.

     

  3. With these settings in syslinux.cfg the second NVMe was missing completely. After removing them the NVMe is still not visible. It looks like a hardware error.

     

    I remember these kinds of errors from years ago. Back then I could solve them with a full Shutdown/Restart instead of a Reboot. Next time I will try Shutdown/Restart again.

     

    ***EDIT*** Couldn't wait. Did a complete shutdown and restarted with the power button on the Supermicro case. Bingo, NVMe2 is back again. I must remember this: never just reboot this system, always shut it down completely instead.
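
    For anyone hitting the same thing, this is roughly how I check whether the device has come back after the power cycle - just a sketch, and nvme-cli may not be present on every install:

      # Does the kernel see the NVMe block devices at all?
      ls /dev/nvme*
      # PCIe level: is the controller even enumerated?
      lspci | grep -i nvme
      # Optional, only if nvme-cli is installed:
      nvme list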

     

  4. 15 hours ago, JorgeB said:

    nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off

     

    The interesting part is that this device was available for a short time after the reboot, and then it silently disappeared. This combination ran happily for years on previous Unraid versions.

     

    Is this correct?

     

    [Screenshot: syslinux.cfg with the suggested parameters added to the append line]

     

    I will reboot then and report.

     

    Thanks.
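
    For reference, this is roughly where I understand the parameters belong - a sketch of the boot entry in /boot/syslinux/syslinux.cfg, assuming an otherwise default entry:

      label Unraid OS
        menu default
        kernel /bzimage
        append initrd=/bzroot nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off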

     

  5. 1 hour ago, T0rqueWr3nch said:

    Is that intentional?

    Yes. I always wanted both NVMe drives as a 2-device pool. But I don't trust BTRFS, and ZFS seems way too complicated to me. So I replicate the first pool device to the second one, which is mounted through Unassigned Devices, via a User Scripts job (see the sketch below). With multi-array support - whenever it arrives - I will change that to a 2-device Unraid array.
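
    Roughly what that replication script does - a simplified sketch, the paths are placeholders for my own pool name and Unassigned Devices mount point:

      #!/bin/bash
      # Mirror the first pool device onto the second NVMe (mounted by Unassigned Devices).
      SRC="/mnt/nvme_pool/"          # first pool device - placeholder name
      DST="/mnt/disks/nvme_backup/"  # second NVMe via Unassigned Devices - placeholder name
      # -a preserves permissions/ownership/timestamps, --delete mirrors deletions too
      rsync -a --delete "$SRC" "$DST"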

     

  6. 9 hours ago, dlandon said:

    I am able to reproduce the issue.  It appears when you remote mount a share from a server that is not another Unraid.

     

    In my case these were Unraid servers as well, all running 6.12.9. I upgraded them all yesterday. They all have SMB mounts to each other's disks (disk shares) via Unassigned Devices.

     

    They all showed this error. The mounts were there, but each mount showed only its first directory. The syslog was full of CIFS VFS errors (quick check below).

     

    After downgrading all machines to 6.12.8, business is back to usual.
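
    In case someone wants to see whether they are affected, this is the kind of check I used - just a sketch, the exact wording of the kernel messages may differ:

      # How many CIFS-related kernel messages are in the current syslog?
      grep -c "CIFS VFS" /var/log/syslog
      # Show the most recent ones with timestamps
      grep "CIFS VFS" /var/log/syslog | tail -n 20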

     

  7. On 9/11/2023 at 8:45 PM, CiscoCoreX said:

    Settings > Network Settings > eth0 > Enable Bridging = No

    Settings > Docker > Host access to custom networks = Enabled

     

    These two are the problem. This workaround was recommended on 6.12.x for users experiencing MACVLAN crashes. You don't need it if (a) you are on IPVLAN, or (b) you never experienced MACVLAN crashes, or (c) you don't use a router like the Fritzbox that is common in Europe.

     

  8. 12 hours ago, CiscoCoreX said:

    Settings > Network Settings > eth0 > Enable Bonding = Yes

    Settings > Network Settings > eth0 > Enable Bridging = No

    Settings > Docker > Host access to custom networks = Enabled

     

    Ah, I made these changes on 6.12.4 as well and most of the time - but not always - have to wait approx. 10 seconds. Sometimes even the login window takes 10 seconds to appear.

     

  9. 26 minutes ago, ljm42 said:

    update_cron is called by Unraid to enable `mover`, but User Shares are disabled on this server so `mover` is not needed.

     

    No User Shares, no mover - this worked on 6.11.5 and stopped working with 6.12.4. That is a breaking change, e.g. for plugins that rely on update_cron to add their own schedules to cron.

     

    I will call update_cron from a script in the User Scripts plugin to work around this change (see the sketch below). No big deal for me, but be prepared for plugins or users that stumble over it.

     

    BTW, why not always call update_cron during system start, just in case somebody needs it?
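
    The workaround itself is trivial - a sketch of the User Scripts entry I have in mind, scheduled to run once when the array first starts (the exact schedule name is from memory, check your User Scripts dropdown):

      #!/bin/bash
      # Re-register the dynamix cron entries that 6.12.4 no longer installs here,
      # because this server has no User Shares and therefore no mover schedule.
      update_cron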

     

  10. 25 minutes ago, itimpi said:

    Perhaps many people will not need the workaround anyway if they have a plugin (such as my Parity Check Tuning plugin) that issues this as part of its install process as the update_cron command is not plugin specific - it should pick up all cron jobs that any plugin has waiting to be activated.

     

    I wrote to @Squid in his User Scripts thread about our conversation here. Perhaps you two can discuss it (calling update_cron during plugin installation).

     

    I only have 4 plugins installed on these machines - only what is really required there. None of them - except User Scripts - needs to set schedules, and I don't want to install a plugin I don't need just to get update_cron called ;-) So I think it's up to this plugin to fix that. And if 6.13 is going to be picky during startup, I think it's even more important to address it.

     

  11. 1 hour ago, bonienl said:

    Apply the changes AFTER updating

     

    @ich777 asked me to add the following to my question above:

     

    I'm running three Unraid servers: one on bare metal and two as VMs on that bare-metal server. The two VMs act only as DAS (Direct Attached Storage, accessed through SMB) - just the array. No Docker containers, no nested VMs.

     

    His idea is that bridging needs to be enabled on the Unraid VMs - it already is.

     

    Currently:

     

    Unraid Bare metal: Bonding=no, Bridging=yes, br0 member=eth0, VLANs=no

    Unraid VMs: Bonding=yes (active_backup(1)), Bridging=yes, VLANs=no

     

    Docker Bare metal: MACVLAN, Host access=no

    Docker on VMs: Disabled

     

    Is this OK? It has been running happily for years, currently on Unraid 6.11.5 with a Fritzbox on IPv4-only DSL.
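
    To double-check what is actually active on each box I look at the kernel's view rather than the GUI - just a sketch, with br0/bond0 being the default interface names on my systems:

      # Quick overview of all interfaces and their state
      ip -br link show
      # Which interfaces are attached to the bridge?
      ip link show master br0
      # Bonding mode and active slave (only present when bonding is enabled)
      cat /proc/net/bonding/bond0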

     

  12. Quote

    Settings -> Network Settings -> eth0 -> Enable Bridging = No

    Settings -> Docker -> Host access to custom networks = Enabled

     

    Is it recommended to make these changes in 6.11.5 before the update from 6.11.5 to 6.12.4-rc18? Or should I update to 6.12.4-rc18 first and apply these changes afterwards?

     
