jsiemon

Members

  • Posts: 23
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed

jsiemon's Achievements

Noob (1/14)

Reputation: 5

  1. I'm also having the same issue with the WebGUI crashing on startup. I'm now running the Tailscale docker container without issue. The WebGUI is reachable, and all shares and docker containers can be accessed, so it doesn't appear to be a server or network setup issue.
  2. This seems to have been resolved with rc7/rc8 and the final release, but I'm still not entirely sure of the root cause. Nevertheless, we can close this for now.
  3. As you suggested, I tried using a PC and was unable to reproduce the issue, just as you had found. That seems to confirm it is a macOS-only issue, which got me thinking. I had OpenZFS installed on the Mac, but not on the Windows 10 PC. I'm not sure why that would matter, but on a hunch I uninstalled OpenZFS from the Mac. Since doing so, I have not been able to reproduce the issue on the Mac. It's only been a few hours, but I have tried hard to recreate the issue without success. However, I've been down this road before, only to have the issue return. I will continue to test, but so far it seems to be working, and I think we have confirmed it is a Mac-only issue, which narrows the focus.
  4. After upgrading to RC7 I was initially hopeful that this issue was resolved. However, after using RC7 for the past couple of days, the issue persists, and I have once again reverted to XFS for the cache pools while the array remains ZFS. No issues with this configuration.
  5. Yes, all SMB. What is very puzzling is that as long as I'm using an XFS-formatted SSD for mover_cache, I can save and move MS Office files repeatedly without any issues. I looked through the TrueNAS/FreeNAS forums for clues, and there are allusions to SMB attributes causing problems between macOS and a ZFS NAS, especially with MS Office, but I could find nothing definitive.
  6. The share name is /Users and the full path to save to is /Users/john/Documentsalias/Retirement Planning/. Also, I tried removing the space in Retirement Planning (Retirement_Planning), but that didn't seem to make a difference.
  7. tower-diagnostics-20230521-1435.zip I've been running 12.RC5 and now 12.RC6, upgraded from 11.5, and I'm experiencing a problem with Mover. On 11.5 I had been running a dedicated mover_cache using a single SSD formatted as XFS. With the move to 12.RC5 I decided to reformat the mover_cache as ZFS, and I also reformatted my array drives from XFS to ZFS. Now, on 12.RC5/12.RC6, when I save a file, such as an Excel spreadsheet, it is appropriately saved on the ZFS-formatted mover_cache pool. When I invoke Mover, the file is moved to the appropriate share on the ZFS array drive, as it should be. No change in behavior from 11.5 with all XFS drives. However, if I try to open the file after it has been moved to the ZFS array drive, it is now seen as read-only. If I open it while it is still on the SSD ZFS mover_cache, I can open it without issue, with full r/w privileges (a small permission-check sketch for this symptom follows this list). Things that work most of the time, at least as temporary fixes, are stopping/starting the array, reformatting the mover_cache as ZFS or XFS, or rebooting the server; afterwards the file can be opened with full r/w privileges. Using an XFS SSD mover_cache is a permanent workaround: in that configuration there is no issue, but it kind of defeats my hope of mirroring the cache drives in the future. The only other change I made was to increase the memory available to the ZFS filesystem from 1/8 to 1/4 of RAM (see the ARC sizing note after this list). I have 32G of ECC RAM and it mostly goes unused, so I thought doubling the ZFS RAM might be beneficial. Attached is the diagnostic taken immediately after a save and move that resulted in the file being read-only. My client machine is running macOS 12.6.5. On Unraid, I have tried toggling the SMB macOS enhancement, auto trim, and compression on the mover_cache and array, but it doesn't seem to matter.
  8. JorgeB - Thanks for taking the time to respond to my issue. Your response got me to think further about my setup as I went step by step through what I had done. I realized I had created the ZFS mover_cache using the Erase button on the Disk Settings page. This morning I converted my XFS mover_cache back to ZFS using the Format button on the Main page. Now everything seems to work properly after invoking the mover: no more read-only errors on my saved files after they are moved to the share (a dataset property-check sketch follows this list). I will continue to monitor this, but for now everything is working as expected. Chalk this up to user error/stupidity, not a bug. Thanks again for your help in working through this.
  9. I've been running 12.RC5 and now 12.RC6, upgraded from 11.5, and I'm experiencing a problem with Mover. On 11.5 I had been running a dedicated mover_cache using a single SSD formatted as XFS. With the move to 12.RC5 I decided to reformat the mover_cache as ZFS. So far so good. I also reformatted my array drives from XFS to ZFS. Now, on 12.RC5/12.RC6, when I save a file, such as an Excel spreadsheet, it is appropriately saved on the ZFS-formatted mover_cache. When I invoke Mover, the file is moved to the appropriate share on the ZFS array drive, as it should be. All good so far, and no change in behavior from 11.5 with all XFS drives. However, if I try to open the file after it has been moved to the ZFS array drive, it is now seen as read-only. If I open it while it is on the SSD ZFS mover_cache, I can open it without issue, with full r/w privileges. If I move the file to the array, then stop/start the array or reboot the server, the file can be opened with full r/w privileges. Through a lot of trial and error I seem to have solved the problem by reverting the mover_cache from ZFS to XFS, leaving the array drives as ZFS. In this configuration there is no issue, but it kind of defeats my hope of mirroring the cache drives in the future. I started with a single SSD ZFS mover_cache, with the intention of migrating my other cache pools (dockers), but I am holding off for now. I know nothing about ZFS, and perhaps this is expected behavior that I missed in the release notes, but it was a surprise to me. The only other change I made was to increase the memory available to the ZFS filesystem from 1/8 to 1/4 of RAM. I have 32G of ECC RAM and it mostly goes unused, so I thought doubling the ZFS RAM might be beneficial. I haven't seen this reported elsewhere, perhaps because most have not converted the array to ZFS. Any other thoughts/guidance is appreciated.
  10. Just to jump in on this issue: I recently updated to 6.10.3 and today I went to do a manual backup using the plugin, but no backup was initiated. I reverted to 6.10.2 and all is working fine. Before reverting, I tried using macvlan instead of ipvlan, but that made no difference. There is definitely an issue with the plugin and 6.10.3. For now I'll just stay with 6.10.2.
  11. Overall my experience with RC2 was good. The random crashes when Docker uses a custom br0 seem to have been addressed with ipvlan. However, I did revert to 6.9.2 because of SMB issues with large file copies (volumes drop off, the copy is interrupted, and copied files are corrupted). In particular, this happened when I tried to copy a multi-GB file containing photos from an unassigned drive to the server using an rsync backup script. I don't seem to have these issues on 6.9.2. I recognize that this SMB issue has been reported by others, but I wanted to add another voice, as it's a fairly big deal and will limit my ability to move to 6.10.
  12. Just a quick follow-up/update on the state of my installation since I last posted 30+ days ago on April 17. Since reverting all of my docker containers from br0 back to Host, I have had no crashes in almost 33 days. What I find interesting, however, is that my VMs are using br0 without issue. It makes me think it is not a network issue but rather the way Docker is implemented in 6.9.x. I believe ryanhaver hypothesized this in a previous post, and my experience seems to support that assertion. FWIW I'm running 6.9.2 and I'm still looking forward to a fix for this issue, as not using br0 for docker containers is a stopgap countermeasure, not a real solution.
  13. After upgrading from 6.9.1 to 6.9.2, I took a chance and added br0 with a static IP back to one of my docker containers. Less than 24 hours later I had a hard crash. My syslog for the past two days is attached, with what I believe to be the crash highlighted in yellow. Apr 16 -17 Crash Log.docx
  14. Just a quick follow-up to my previous post. It's been about a week now since I changed my two docker containers from br0 + static IP to Host, and so far no crashes. All of my other docker containers were already on Host or Bridge. I realize that a week isn't a long time, but prior to this change I had three kernel panics (KPs) in the span of a week, so this change appears to have made a significant improvement and may point to the root cause of the issue. I'll report back again in another week, good or bad. UPDATE: url_redirect.cgi.bmp I spoke too soon. Just had a KP. Here is a screenshot. I know it's not the same as a log, but it is all that I have.
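
A minimal sketch of how one might verify the read-only symptom described in items 7 and 9, by checking a file's mode bits and effective write permission from the client before and after Mover runs. The share mount point and file path below are hypothetical placeholders, not taken from the posts above.

```python
#!/usr/bin/env python3
"""Check whether a file on a mounted Unraid share is writable.

Run once while the file still sits on the mover_cache pool and again
after Mover has relocated it to the array, then compare the output.
"""
import os
import stat
import sys

# Hypothetical path to the file on the mounted share -- adjust to your setup.
PATH = sys.argv[1] if len(sys.argv) > 1 else "/Volumes/Users/john/test.xlsx"

def report(path: str) -> None:
    st = os.stat(path)
    mode = stat.filemode(st.st_mode)      # e.g. "-rw-rw-rw-" or "-r--r--r--"
    writable = os.access(path, os.W_OK)   # effective write permission for this user
    print(path)
    print(f"  mode     : {mode} (octal {oct(st.st_mode & 0o7777)})")
    print(f"  uid/gid  : {st.st_uid}/{st.st_gid}")
    print(f"  writable : {writable}")

if __name__ == "__main__":
    report(PATH)
```

If the mode bits and ownership look unchanged but the writable flag flips to False only after the move, that would suggest the problem lives in SMB/ACL metadata rather than plain POSIX permissions, which would be consistent with the macOS-only behavior described above.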
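
The ARC change mentioned in items 7 and 9 (raising the ZFS memory allowance from 1/8 to 1/4 of RAM) works out as follows on a 32 GiB machine. The sketch below is just that arithmetic, plus the byte value a stock OpenZFS install would expect for the zfs_arc_max module parameter; it is not a claim about how Unraid itself exposes the setting.

```python
# ZFS ARC sizing arithmetic for a 32 GiB system (illustrative only).
GIB = 1024 ** 3
ram_bytes = 32 * GIB

arc_eighth  = ram_bytes // 8   # 1/8 of RAM (the starting value in the post) -> 4 GiB
arc_quarter = ram_bytes // 4   # 1/4 of RAM (the raised value)               -> 8 GiB

print(f"1/8 of RAM: {arc_eighth // GIB} GiB ({arc_eighth} bytes)")
print(f"1/4 of RAM: {arc_quarter // GIB} GiB ({arc_quarter} bytes)")
# On stock OpenZFS the cap is the zfs_arc_max parameter, given in bytes,
# e.g. written to /sys/module/zfs/parameters/zfs_arc_max at runtime.
```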
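
For the Erase-versus-Format distinction in item 8, one way to sanity-check the resulting pool is to inspect the dataset properties most likely to matter for permissions over SMB. This is only a sketch: it assumes the dataset is reachable under the name mover_cache (a placeholder) and simply shells out to the standard zfs get command on the server.

```python
#!/usr/bin/env python3
"""Print a few ZFS dataset properties relevant to read-only/permission symptoms."""
import subprocess
import sys

# Placeholder dataset name -- replace with the actual pool/dataset path.
DATASET = sys.argv[1] if len(sys.argv) > 1 else "mover_cache"

# readonly, acltype and xattr all influence how permissions and ACLs behave over SMB.
PROPS = "readonly,acltype,xattr,atime,mountpoint"

result = subprocess.run(
    ["zfs", "get", "-H", "-o", "property,value", PROPS, DATASET],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.splitlines():
    prop, value = line.split("\t")
    print(f"{prop:12s} {value}")
```

Comparing this output for a pool created with Erase against one created with Format might show whether the two paths leave different properties behind; if the output is identical, the difference presumably lies elsewhere.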