glennv

Everything posted by glennv

  1. Can I host this folder on ZFS? Or does it have to be XFS/BTRFS? (All my Docker persistent data / appdata has been on ZFS for years.)
  2. Some fail, others run, as they just start creating directories and initialising data under the mountpoint (/mnt/disks/virtuals) of the not-mounted ZFS volume/datasets if these don't exist. So I have to stop them all and clean up the mess they create. Now that I understand it, I know how I am going to do upgrades in the future: I will do the builds on my backup Unraid server, which runs no Docker containers, and then just copy the build files to my primary when all works fine. Seems a better approach anyway.
  3. Yeah, now I understand. I thought I could build for the target version and include ZFS etc. in one go. The downside is that when I upgrade to the target version I lose ZFS, and all my Dockers are there. So I have to disable autostart one by one for over 30 Dockers before the upgrade to a version without ZFS, so that I am then able to start the array, access the kernel helper Docker afterwards, and build for ZFS. Then afterwards I re-enable autostart again one by one. But I don't update very often, so not a big issue. At least I understand why now.
  4. Hi @ich777, I have been using your wonderful kernel helper Docker for a while now (mainly to include the gnif/vendor-reset patch for my GPU), but I have a funny annoyance with including ZFS. Every time I upgrade, the first build does not have ZFS working. It boots into the newly built OS, but the ZFS commands are not available. Then, as a second run, I build again after cleaning the output dir etc., without changing anything in the Docker settings, just a rerun. I put that version in place, and after booting into that second build, ZFS works. Is that expected behaviour, that ZFS cannot be active when building with ZFS included? Or do I have to be on the target release to build the target release with options like ZFS etc.? I was bitten by it again today when updating from 6.9.0 to 6.9.1: my Dockers (30+) autostart, but on non-mounted ZFS. So I have to clean up all the mess they create in the wrong place (just in directories under the non-mounted mountpoint) instead of in the ZFS datasets where they all have their persistent storage locations.
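     The root of the mess described above is containers writing into the bare mountpoint directory when the ZFS dataset underneath is not actually mounted. A minimal guard sketch (Python; the path and function names are hypothetical, not part of Unraid or the kernel helper) that refuses to proceed unless the path is a real mountpoint:

     ```python
     import os

     def require_mountpoint(path: str) -> bool:
         """True only if `path` exists and a filesystem (e.g. a ZFS
         dataset) is mounted on it -- not merely an empty directory
         left behind on the parent filesystem."""
         return os.path.isdir(path) and os.path.ismount(path)

     # Hypothetical appdata location from the post; adjust to your system.
     APPDATA = "/mnt/disks/virtuals"

     def safe_to_start(path: str = APPDATA) -> str:
         if require_mountpoint(path):
             return "mountpoint OK, safe to start containers"
         return f"{path} is not a mounted filesystem; do not start containers"

     print(safe_to_start())
     ```

     Running such a check from a startup script before container autostart would turn the silent "directories created in the wrong place" failure into a loud, early one.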
  5. Hi ghost82, what is the advantage again of using these instead of the standard Unraid versions? Sorry if this was posted/explained before, but I can't seem to remember. Tnx for all your great work btw.
  6. Upgraded my main production and backup Unraid servers (using the kernel build Docker from the ich777 master to include the AMD GPU reset bug patch and ZFS), and besides the spindown issue, which still did not go away with the upgrade from RC2 to final, all went smoothly. After breaking my head over why, even with all Dockers, VMs and shares down, it still would not spin down my array drives, I finally found it was a custom script of my own, running in the User Scripts plugin, that did smartctl calls every 5 minutes to collect drive standby statistics for my Splunk dashboard. Once I disabled that, all was smooth sailing. I replaced the smartctl calls with hdparm calls, which give me the same data but apparently do not disturb the standby behaviour of Unraid. So if you experience this issue, look for "anything" that can fire off smartctl calls. (Grafana/Telegraf and the like have already been pointed at in other recent posts.)
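     For the standby-statistics use case, the key point is that `hdparm -C` reports the drive's power state without waking it. A small sketch (Python; the two-line `hdparm -C` output format and the numeric mapping are assumptions for illustration, not anything Unraid ships) that turns that output into a dashboard-friendly value:

     ```python
     import re

     # hdparm -C typically prints something like:
     #   /dev/sda:
     #    drive state is:  standby
     # Assumed mapping of states to metric values for a dashboard.
     STATE_VALUES = {"active/idle": 1, "standby": 0, "sleeping": 0, "unknown": -1}

     def parse_drive_state(hdparm_output: str) -> str:
         """Extract the state string from `hdparm -C` output; fall back
         to 'unknown' when the expected line is missing."""
         m = re.search(r"drive state is:\s*(\S.*)", hdparm_output)
         return m.group(1).strip() if m else "unknown"

     sample = "/dev/sda:\n drive state is:  standby\n"
     state = parse_drive_state(sample)
     print(state, STATE_VALUES.get(state, -1))
     ```

     As a side note, smartctl itself has a `-n standby` (`--nocheck=standby`) option that skips drives that are spun down, which may be another way to keep SMART polling without defeating spindown.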
  7. so minimum 2 weeks then ✌️😜
  8. Dear and wonderful devs, when can we expect RC3? Since I moved to RC2 from 6.8, none of my (SATA) drives (via LSI cards) go into standby anymore, whatever I do. Apparently a known bug, so I was waiting patiently. But I have been building up a bigger and bigger power bill for a while now. I can't move back, as this release is the very first in the history of the galaxy that finally properly fixed my AMD reset bug. So I am eagerly awaiting the standby fix in RC3.
  9. Worked great for a year, but suddenly I get these messages on top of my gallery pages:

     Deprecated: Array and string offset access syntax with curly braces is deprecated in /config/www/gallery/include/functions_cookie.inc.php on line 72
     Notice: Trying to access array offset on value of type null in /config/www/gallery/include/functions_category.inc.php on line 140
     Notice: Trying to access array offset on value of type null in /config/www/gallery/include/functions_category.inc.php on line 141

     UPDATE: To remove the deprecation message (which prevented logins from my iOS Piwigo client app), I modified the following line in <config>/www/gallery/local/config/config.inc.php
     from: $conf['show_php_errors'] = E_ALL;
     to: $conf['show_php_errors'] = E_ALL & ~E_DEPRECATED;
     The other messages, no idea, but they do not harm my iOS app, so ignoring for now. They only show up on the main landing page of the gallery when using a web browser.

     UPDATE 2: Upgraded to 11.3.0 from within the Piwigo admin page, and that resolved the remaining errors. So closed.
  10. I finally have it working, after also almost giving up. Built a latest 6.9.0-rc2 kernel with the latest gnif/vendor-reset patch, and now zero issues anymore with macOS or Windows VMs and passthrough. I can restart the VM at will, even force-kill it. Check this thread: https://forums.unraid.net/topic/92865-support-ich777-nvidiadvbzfsiscsimft-kernel-helperbuilder-docker/
  11. Yeah, I installed RC2 two days ago and since then zero spindowns (I am monitoring it with Splunk, so it was an easy spot). All SATA, btw. Apparently fixed in RC3, so we will wait.
  12. I am so sorry, buddy, that I did not react sooner, but I had a bit of a bad personal time, so I was off the radar for a while. And again, thanks for all the help in the past.
  13. It's https://github.com/gnif/vendor-reset, but I just include it via the kernel helper Docker.
  14. Looking at the git repo, it seems that branch was merged with master 16 days ago, so probably I could have used master. Edit: building it now....
  15. WOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA Unbelievable, but finally success. I had given up on having a working AMD GPU reset bug patch on my main system in my OSX Mojave VM with a 5700XT passed through. Tried every patch on the planet, which either crashed my whole system or just did not work and required reboots on every VM restart, incl. more recently using builds from / with the help of the great ich777 and giganode. But even their help could not beat my stubborn system, where others had success using the same builds. So I forgot about it until today. I just decided to give the Unraid-Kernel-Helper a go and build a kernel myself with just ZFS and the audio-reset branch of the vendor-reset patch. And it is working like a rock. I can do whatever I want with my OSX Mojave VM, even force shutdowns. Zero issues starting it up again. Amazing. Thanks for all the hard work, guys!!!!!!
  16. Fingers crossed, as I am also getting really tired of having to reboot my whole server whenever I make a small change to my OSX VM with a 5700XT in an otherwise perfect Unraid system.
  17. The world is full of people who make mistakes, every day, 24x7, and it will never stop. True progress and understanding come from those who honestly, and without holding back, apologise in public for their f-up, and I can only applaud that. And it is never, ever too late to apologise, as long as it is honest and fully from the heart. I hope the damage done is not too big and we can move on as one big happy hacking family. I so love Unraid and all its amazing company and community developers who put time, blood and sweat into making this product great (again) for us humble end users. Thank you all, and please throw this whole ffing year in /dev/null where it belongs, and let's make it an amazing 2021 to compensate. And if we compare what happened here to the worldwide pandemic and all its pain and suffering, we should be able to step over this comparatively little thing, right?
  18. Hi @ghost82, sorry for the late reply, but I have been busy lately, including finishing a new 14-core Skylake-X build that replaced one of my main workstations. Indeed I have multiple networks: 1x 1GB and 1x 10GB, both passed-through cards. But your OpenCore device config trick worked like a charm; the XML-only method did not, likely, as you suggested, because of the multiple networks. Although I remember having it working in the past, when one of the two was a virtual network and the other a passed-through card. But the virtual one was never stable enough, so I moved to real metal. Tnx again.
  19. Lifetime of UPS.

    You should not have to replace your UPS, just the battery. Typically the UPS indicates when it is time to do so, and then it is as good as new again.
  20. Never mind, found it myself. It's the default port for the RPC service, which apparently is 8088. I uncommented this in the config file and am good now. Tnx.

     # Bind address to use for the RPC service for backup and restore.
     bind-address = "127.0.0.1:58083"
  21. Hi @testdasi, playing with your Grafana Unraid Stack, but I am hitting a problem where it seems to use a port that is not advertised as being used. It's 8088, used by InfluxDB next to the configured 8086. I need that 8088, as it is already in use on my host. Where is that configured, so I can change it? I looked in the InfluxDB config but don't see it there.

     ┌─[14:32:24]─[root@TACH-UNRAID]:[/mnt/disks/virtuals/appdata/Grafana-Unraid-Stack/influxdb]
     └─[!139]─:)─> lsof -i :8088
     COMMAND     PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
     influxd   47186 root    3u  IPv4 2960037      0t0  TCP TACH-UNRAID:8088 (LISTEN)
     ┌─[14:45:54]─[root@TACH-UNRAID]:[/mnt/disks/virtuals/appdata/Grafana-Unraid-Stack/influxdb]
     └─[!140]─:)─> lsof -i :8086
     COMMAND     PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
     influxd   47186 root   16u  IPv6 2964767      0t0  TCP *:8086 (LISTEN)
     influxd   47186 root   33u  IPv6 3108085      0t0  TCP TACH-UNRAID:8086->TACH-UNRAID:36474 (ESTABLISHED)
     influxd   47186 root   34u  IPv6 3090408      0t0  TCP TACH-UNRAID:8086->TACH-UNRAID:36476 (ESTABLISHED)
     grafana-s 47408 root   19u  IPv4 3101218      0t0  TCP TACH-UNRAID:36474->TACH-UNRAID:8086 (ESTABLISHED)
     grafana-s 47408 root   20u  IPv4 3092129      0t0  TCP TACH-UNRAID:36476->TACH-UNRAID:8086 (ESTABLISHED)

     Tnx
  22. @ghost82 I stole some downtime, as I wanted to test it quickly. Worked like a charm. You rock, dude!! Now this %^%$ piece of software finally also runs fine, since my en0 is seen as internal. (Actually the software is amazing, but it's Apple's crap again.) Damn Apple. But they are not unique. In the past I have seen, even at enterprise level, software licenses tied to an internal ethernet card. Then you replaced the card and the license broke.
  23. Tnx, but this I already have, and it is apparently in some cases not enough. Will try ghost82's method, which looks promising.