jsiemon

Everything posted by jsiemon

  1. I’m also having the same issue with the WebGUI crashing on startup. I’m now running the Tailscale docker instead, without issue. The WebGUI is reachable, and all shares and docker containers can be accessed, so it doesn’t appear to be a server or network setup issue.
  2. This seems to have been resolved with rc7/rc8 and the final release, but I'm still not entirely sure of the root cause. Nevertheless, we can close this for now.
  3. As you suggested, I tried using a PC and I was unable to reproduce the issue, just as you had found. That seems to confirm it is a macOS-only issue, which got me thinking. I had OpenZFS installed on the Mac, but not on the Windows 10 PC. I'm not sure why that would matter, but on a hunch I uninstalled OpenZFS from the Mac. Since doing so, I have not been able to reproduce the issue on the Mac. It's only been a few hours, but I have tried hard to recreate the issue without success. However, I've been down this road before, only to have the issue return. I will continue to test, but so far it seems to be working, and I think we have confirmed it is a Mac-only issue, so that narrows the focus.
  4. After upgrading to RC7 I was initially hopeful that this issue was resolved. However, after using RC7 for the past couple of days, the issue persists and I have once again reverted to XFS for the cache pools while the array remains ZFS. No issues with this configuration.
  5. Yes, all SMB. What is very puzzling is that as long as I'm using an XFS-formatted SSD for mover_cache, I can save and move MS Office files repeatedly without any issues. I looked through the TrueNAS/FreeNAS forums for clues, and there are allusions to SMB attributes causing issues between macOS clients and ZFS-based NAS boxes, especially with MS Office, but I could find nothing definitive.
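     For what it's worth, one way to see what macOS thinks is on the file is to check the ACL entries and extended attributes on the SMB-mounted copy; the path and file name below are only placeholders, not my actual share layout:

     # on the Mac: -e shows ACL entries, -@ lists extended attribute names
     ls -le@ "/Volumes/Users/john/Documents/Retirement Planning/budget.xlsx"
     # dump the extended attributes and their values
     xattr -l "/Volumes/Users/john/Documents/Retirement Planning/budget.xlsx"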
  6. The share name is /Users and the full path to save to is /Users/john/Documentsalias/Retirement Planning/. Also, I tried removing the space in Retirement Planning (Retirement_Planning), but that didn't seem to make a difference.
  7. tower-diagnostics-20230521-1435.zip

     I’ve been running 12.RC5 and now 12.RC6, upgraded from 11.5, and I’m experiencing a problem with Mover. On 11.5 I had been running a dedicated mover_cache using a single SSD formatted as XFS. With the move to 12.RC5 I decided to reformat the mover_cache as ZFS, and I also reformatted my array drives from XFS to ZFS.

     Now, on 12.RC5/12.RC6, when I save a file such as an Excel spreadsheet, it is appropriately saved on the ZFS-formatted mover_cache pool. When I invoke Mover, the file is moved to the appropriate share on the ZFS array drive as it should be; no change in behavior from 11.5 with all-XFS drives. However, if I try to open the file after it has been moved to the ZFS array drive, it is now seen as Read Only. If I open it while it is on the SSD ZFS mover_cache, I can open it without issue, with full r/w privileges.

     Things that work most of the time, at least as temporary fixes: stop/start the array, reformat the mover_cache (ZFS or XFS), or reboot the server; after that the file can be opened with full r/w privileges. Using an XFS SSD for mover_cache is a permanent fix, but not really a solution: in that configuration there is no issue, but it kind of defeats my hope of mirroring the cache drives in the future.

     The only other change I made was to increase the memory available to the ZFS filesystem from 1/8 to 1/4. I have 32GB of ECC RAM that mostly goes unused, so I thought doubling the ZFS RAM might be beneficial.

     Attached is the diagnostic taken immediately after a save and move that left the file Read Only. My client machine is running macOS 12.6.5. On Unraid, I have tried toggling the SMB macOS enhancement, auto trim, and compression on the mover_cache and array, but it doesn't seem to matter.
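     In case it helps anyone reproduce this, something like the following could be run from the server console to compare the copy on the cache with the copy after Mover runs (the pool names, share path, and file name here are only examples, not necessarily how your pools are laid out):

     # permissions/ownership of the copy on the ZFS cache pool (opens read/write from the Mac)
     ls -l /mnt/mover_cache/Users/john/test.xlsx
     # the same file after Mover has moved it to a ZFS array disk (shows as Read Only on the Mac)
     ls -l /mnt/disk1/Users/john/test.xlsx
     # dataset properties that can affect how permissions/ACLs are presented over SMB
     zfs get acltype,xattr,aclinherit mover_cache disk1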
  8. JorgeB - Thanks for taking the time to respond to my issue. Your response got me thinking further about my setup as I went step by step through what I did. I realized I had created the ZFS mover_cache using the Erase button on the Disk Settings page. This morning I converted my XFS mover_cache back to a ZFS mover_cache using the Format button on the Main page. Now everything works properly after invoking Mover: no more Read Only errors on my saved files after they are moved to the share. I will continue to monitor this, but for now everything is working as expected. Chalk this up to user error/stupidity, not a bug. Thanks again for your help in working through this.
  9. I’ve been running 12.RC5 and now 12.RC6, upgraded from 11.5, and I’m experiencing a problem with Mover. On 11.5 I had been running a dedicated mover_cache using a single SSD formatted as XFS. With the move to 12.RC5 I decided to reformat the mover_cache as ZFS. So far so good. I also reformatted my array drives from XFS to ZFS.

     Now, on 12.RC5/12.RC6, when I save a file such as an Excel spreadsheet, it is appropriately saved on the ZFS-formatted mover_cache. When I invoke Mover, the file is moved to the appropriate share on the ZFS array drive as it should be. All good so far, and no change in behavior from 11.5 with all-XFS drives. However, if I try to open the file after it has been moved to the ZFS array drive, it is now seen as Read Only. If I open it while it is on the SSD ZFS mover_cache, I can open it without issue, with full r/w privileges. If I move the file to the array and then stop/start the array or reboot the server, the file can be opened with full r/w privileges.

     Through a lot of trial and error I seem to have solved the problem by reverting the mover_cache from ZFS to XFS, leaving the array drives as ZFS. In this configuration there is no issue, but it kind of defeats my hope of mirroring the cache drives in the future. I started with a single SSD ZFS mover_cache, with the intention of migrating my other cache pools (dockers), but I am holding off for now. I know nothing about ZFS, and perhaps this is expected behavior that I missed in the release notes, but it was a surprise to me.

     The only other change I made was to increase the memory available to the ZFS filesystem from 1/8 to 1/4. I have 32GB of ECC RAM that mostly goes unused, so I thought doubling the ZFS RAM might be beneficial.

     I haven’t seen this reported elsewhere, perhaps because most have not converted the array to ZFS. Any other thoughts/guidance is appreciated.
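     For context, the memory change I mention above is the maximum ZFS ARC size. As far as I understand it, that maps to the OpenZFS zfs_arc_max module parameter, so something like the following would show and set the same limit from the console (the 8 GiB figure is just 1/4 of my 32GB, adjust for your system; this only persists until reboot):

     # show the current ARC cap in bytes (0 means the OpenZFS default)
     cat /sys/module/zfs/parameters/zfs_arc_max
     # cap the ARC at 8 GiB (8 * 1024^3 bytes) for the running system
     echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max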
  10. Just to jump in on this issue, I recently updated to 6.10.3 and today I went to do a manual backup using the plugin but no backup was initiated. I reverted back to 6.10.2 and all is working fine. Before reverting to 6.10.2 I tried using macvlan instead of ipvlan but that made no difference. There is definitely an issue with the plugin and 6.10.3. For now I'll just stay with 6.10.2.
  11. Overall my experience with RC2 was good. The random crashes with Docker using custom br0 seem to have been addressed with ipvlan. However, I did revert back to 6.9.2 because of SMB issues with large file copies (volumes drop off, copies are interrupted, and files end up corrupted). In particular, this happened when I tried to copy a multi-GB file containing photos from an unassigned drive to the server using an rsync backup script. I don't seem to have these issues with 6.9.2. I recognize that this SMB issue has been reported by others, but I wanted to add another voice, as it's a fairly big deal and will limit my ability to move to 6.10.
  12. Just a quick follow-up/update on the state of my installation since I last posted 30+ days ago on April 17. Since reverting all of my docker containers from br0 back to Host, I have had no crashes in almost 33 days. However, what I find interesting is that my VMs are using br0 without issue. It makes me think this is not a network issue but rather the way Docker is implemented in 6.9.x. I believe ryanhaver hypothesized this in a previous post, and my experience seems to support that assertion. FWIW I'm running 6.9.2 and I'm still looking forward to a fix for this issue, as not using br0 for docker containers is a stop-gap countermeasure, not a real solution.
  13. After upgrading from 6.9.1 to 6.9.2 I took a chance and added br0 with a static IP back on one of my Docker containers. Less than 24 hours later I had a hard crash. My syslog for the past 2 days is included, with what I believe to be the crash highlighted in yellow. Apr 16 -17 Crash Log.docx
  14. Just a quick follow-up to my previous post. It's been about a week now since I changed my 2 Docker containers from br0 + static IP to Host, and so far no crashes. All of my other Docker containers were already on Host or Bridge. I realize that a week isn't a long time, but prior to this change I had 3 KPs in the span of a week, so this change appears to have made a significant improvement and may point to the root cause of the issue. I'll report back again in another week, good or bad. UPDATE: I spoke too soon. Just had a KP. Here is a screenshot: url_redirect.cgi.bmp. I know it's not the same as a log, but it is all that I have.
  15. Count me as another who is now experiencing hard crashes on my server, and based on my reading here I suspect it is my Dockers on br0 causing problems. I've been running Unraid for a long time (more than 15 years?) and can't recall ever having had a server crash. However, since upgrading to 6.9.1 I've had 3 hard crashes in the span of a week that required forcibly powering off. Prior to upgrading to 6.9.1 I was running 6.8.2 using Dockers and br0 with static IPs without any issue, but now this bug seems to have gotten me as well. Today I've moved all of my dockers to Bridge and Host where appropriate and will see if this resolves my crashing issues. I'll report back here either way.
  16. SOLVED: The issue is not related to the Docker template or Unraid; it is unique to openHAB. I have a high core count server (32 cores) and Jetty chokes. The solution is to either pin the docker to no more than 4 cores or update the openHAB conf/services/runtime.cfg as follows. For future reference: in openHAB 3.0, org.eclipse.smarthome.webclient has changed to org.openhab.webclient, so the settings become:

      org.openhab.webclient:minThreadsShared=20
      org.openhab.webclient:maxThreadsShared=40
      org.openhab.webclient:minThreadsCustom=15
      org.openhab.webclient:maxThreadsCustom=40

      After making these changes OH3 runs without apparent issue.
      _______________
      I've been running openHAB 2.5.x for almost 2 years now using the Docker template without much problem. I have also spun up another Docker container to evaluate openHAB 3.0, and it runs pretty well overall, but I have one major problem with OH 3.0 that I do not have with OH 2.5.x: I am unable to connect to OH 3.0 from the myopenhab.org website. I can push notifications from the site, but I cannot access my local OH 3.0 WebUI from home.myopenhab.org. With 2.5.x I have no issue. Every time I try to access OH3 this way I get an error: openHAB connection error: HttpClient@4016d1a5{FAILED} is stopped

      Any ideas how to fix this? Is this a permissions issue or some incompatibility with the Unraid Docker setup (my suspicion)? I am running Unraid 6.8.2 and have tried 6.8.3 and 6.9-rc2, but it makes no difference. I have also removed and recreated the docker image and docker containers, without a better result. If I switch over to OH2 I am immediately able to access the local OH2 WebUI. I also tried installing OH3 on a macOS VM hosted on Unraid using br0, just like my dockers, and that worked like a charm, so I know OH3 can run on my Unraid network setup.

      If it was only UI access I wouldn't be too concerned, but any traffic that uses the home.myopenhab.org interface, such as OwnTracks, is not able to communicate with my OH3 installation, so I have no way to update presence detection that relies on the GPS tracker/OwnTracks. Any ideas as to what might be the problem or what to try next? Any arguments I should be using during the install?
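      If anyone wants to try the core-pinning workaround mentioned in the SOLVED note above rather than the config change: on Unraid that can be done through the CPU pinning options in the container template, or by adding something like the following to the container's Extra Parameters (the core numbers are just an example):

      --cpuset-cpus="0-3"

      which, outside of Unraid, would be roughly the equivalent of:

      # limit the container to the first four cores so Jetty's thread sizing stays sane
      docker run -d --name=openhab --cpuset-cpus="0-3" openhab/openhab:3.0.0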
  17. Just updated my unRaid server from the original Intel D865GLCLK motherboard and Celeron, which were still working fine BTW after 6+ years. I was toying with adding another 4-port Promise SATAII card and increasing the RAM to 2GB from the 512MB I was using, but for very few bucks more I got much more expandability and better performance. The new MB is a Foxconn G41MXE with a dual-core Celeron E3400 and 4GB of DDR3 RAM; 8 2GB SATA drives + 1 hot-swap HDD on 2 Promise PCI SATAII cards plus the MB SATA connectors. Working fine so far; the LAN was recognized with no issue. This MB has 2 PCI, 1 PCIe x1, and 1 PCIe x16 slots, plus 4 SATAII connectors on the MB. Lots of potential expansion for a pretty low investment. It won't set any speed records, but that's not why/how I use unRaid. Using unRaid 4.7. This is just FYI, since my searches turned up nothing regarding unRaid compatibility for this board.

      UPDATE 4/15/2012: This weekend I received a Highpoint RocketRAID 2300 PCIe x1 card I purchased on eBay. I have been unable to get the G41MXE to acknowledge the presence of the card. This has nothing to do with unRaid, as the card should post on boot prior to loading the OS, and it doesn't. I have an Asus P5WDH that I use for multi-OS work and video encoding; on it the RR2300 was recognized, and I was able to flash the BIOS to the latest build downloaded from HighPoint (v2.5). Still no go with the G41MXE. I tried a number of settings and combinations in both the RR2300 BIOS and the G41MXE BIOS without any luck. At this point I would say the RR2300 and G41MXE are incompatible.
  18. I agree that restoring a few files from within Time Machine is not a problem; I'm able to do that with no issue. What I am curious to know is whether anyone has tried accessing their Time Capsule from the Install DVD and/or restoring an entire volume.
  19. I'm trying to do a restore from the DVD, to restore the entire volume. I have tried Guest without a password and had no luck. I don't know if I did something wrong when I set this up or if this is a bug, but either way it's not good. Also, I have installed the patch/update for 10.7.3. Just curious if anyone has had any success in accessing their Unraid Time Capsule for a full restore from their DVD/install disk. That might help narrow this down to a system issue (my problem) or a software problem (our problem). A backup won't do us any good if we can't retrieve it.
  20. I'm having trouble doing a complete restore. It keeps asking for a password for the Unraid server, in my case /Tower. I am not using password protection on my Unraid server or the Time Capsule (Guest). Any ideas on what I may be missing?