ich777 Posted October 18, 2020 (Author)
1 hour ago, hawihoney said: I do have the GPU UUID in my Plex docker already. I took this UUID from the LSIO Nvidia plugin that I'm using currently. I asked my question because you mentioned somewhere in this thread that the GPU UUID might change when switching over to your approach. Just curious.
Normally the UUID doesn't change, since it is the UUID of the GPU itself. I always recommend using the Unraid-Kernel-Helper plugin, since it is the plugin created for the images you build, and it gives a little more information about them. EDIT: Another user said that, not me.
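As a sanity check after switching builds, the UUID can be read from `nvidia-smi -L` and compared against the value used in the Plex container. A minimal sketch; the sample output line below is illustrative, not from a real system:

```shell
# Extract the GPU UUID from an `nvidia-smi -L` line.
# On a live system you would use:  line=$(nvidia-smi -L | head -n 1)
line='GPU 0: GeForce GTX 1050 Ti (UUID: GPU-12345678-abcd-ef01-2345-6789abcdef01)'

# Pull out the "GPU-..." token between "UUID: " and the closing parenthesis.
uuid=$(printf '%s\n' "$line" | sed -n 's/.*UUID: \(GPU-[^)]*\)).*/\1/p')
echo "$uuid"
```

If the printed UUID matches what the container was configured with under the old plugin, nothing needs to change after the switch.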
188pilas Posted October 20, 2020
@ich777 Thanks for this helpful tool. Have you tried using a ZFS zvol and attaching it to an iSCSI target? I would like to test the performance compared to an SMB share for a gaming library.
ich777 Posted October 20, 2020 (Author)
23 minutes ago, 188pilas said: Have you tried using a ZFS zvol and attaching it to an iSCSI target?
No, because I don't use ZFS personally, but you could easily do that. I think I have a build (6.9.0-beta30) with ZFS and iSCSI lying around somewhere; if you haven't built one yourself, feel free to contact me.
188pilas Posted October 20, 2020
5 minutes ago, ich777 said: I think I have a build (6.9.0-beta30) with ZFS and iSCSI lying around somewhere...
I am going to build one with Unraid 6.8.3 and ZFS 0.8.4. I also have a Mellanox Technologies MT26448 ConnectX EN 10GigE card and will try to build with the Mellanox Firmware Tools. I am going to back up my pool and boot drive first. I do not have a cache drive, and I read that builds go to /mnt/cache/appdata/kernel/output-VERSION by default. Can I change that manually, or do I need to put in a cache drive temporarily?
ich777 Posted October 20, 2020 (Author)
31 minutes ago, 188pilas said: I am going to build one with Unraid 6.8.3 and ZFS 0.8.4...
You can also build it with 0.8.5 (the latest version, which also works with Unraid 6.8.3).
31 minutes ago, 188pilas said: I also have a Mellanox Technologies MT26448 ConnectX EN 10GigE card and will try to build with the Mellanox Firmware Tools.
The build with these tools takes a little longer, but it should be no problem. (I also have a ConnectX-2 with one SFP port; I can now flash the ROM, I successfully removed the BIOS from the card to speed up boot times, and I can read its temperature with my Unraid-Kernel-Helper plugin.) I forgot to mention that if you build for 6.8.3, the MFT Tools build is not implemented.
31 minutes ago, 188pilas said: Can I change that manually?
CA should detect that and change it automatically to something like /mnt/disk2/appdata/kernel (normally wherever your appdata files are), but you can also do it manually. Please be careful that Unraid doesn't split the files over multiple disks; that could cause complications.
31 minutes ago, 188pilas said: or do I need to put in a cache drive temporarily?
Of course not. EDIT: If you have any problems, feel free to contact me again.
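The split-level caution above is easy to check by hand: on Unraid each array disk is mounted under /mnt/diskN, so a share that appears under more than one of them is split. A small illustrative helper (the base directory is a parameter here so it can stand in for /mnt; paths and names are made up):

```shell
# Sketch: warn when a share directory is split across multiple array disks,
# which the post above cautions against for the kernel build output.
# "$base" stands in for /mnt on a real Unraid box.
check_split() {
    base=$1 share=$2
    copies=$(ls -d "$base"/disk*/"$share" 2>/dev/null)
    count=$(printf '%s\n' "$copies" | grep -c .)
    if [ "$count" -gt 1 ]; then
        echo "WARNING: $share exists on $count disks"
    else
        echo "$share is on a single disk (or absent)"
    fi
}
```

For example, `check_split /mnt appdata/kernel` would warn if the build output had been spread over disk1 and disk2.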
ich777 Posted October 20, 2020 (Author)
30 minutes ago, 188pilas said: I also have a Mellanox Technologies MT26448 ConnectX EN 10GigE card and will try to build with the Mellanox Firmware Tools.
I forgot to mention that if you build for 6.8.3, the MFT Tools build is not implemented; it is only available for the 6.9.0 betas.
188pilas Posted October 20, 2020
11 minutes ago, ich777 said: the MFT Tools build is not implemented; it is only available for the 6.9.0 betas.
Ah right, I saw that in the first post. Thanks!
hawihoney Posted October 20, 2020
Last question before I switch. In the Nvidia plugin description I found:
Quote: This plugin from LinuxServer.io allows you to easily install a modified Unraid version with Nvidia drivers compiled and the docker system modified to use an nvidia container runtime, meaning you can use your GPU in any container you wish.
Is the docker system modified too when using your prepared Nvidia build?
ich777 Posted October 20, 2020 (Author)
10 minutes ago, hawihoney said: Is the docker system modified too when using your prepared Nvidia build?
It's basically the same; the only difference is that I made it open source (though I also understand why linuxserver.io keeps theirs closed). A few things are different, which is why I recommend using my Kernel-Helper plugin instead of the linuxserver.io plugin. I made it mainly for hardware acceleration in Docker containers and, in my case, to combine it with the DigitalDevices DVB drivers; over time some things were added (MFT Tools, iSCSI, ...).
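Either way, whether the nvidia container runtime has been registered with Docker can be checked from the "Runtimes" line of `docker info`. A sketch; the sample line below is hard-coded for illustration rather than taken from a live daemon:

```shell
# Sketch: confirm the "nvidia" runtime is registered with Docker.
# On a live system the fragment would come from:  docker info 2>/dev/null
info='Runtimes: nvidia runc'
case "$info" in
    *nvidia*) status="nvidia runtime registered" ;;
    *)        status="nvidia runtime missing" ;;
esac
echo "$status"
```

If the runtime is missing, GPU passthrough to containers will not work regardless of which build produced the kernel.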
hawihoney Posted October 21, 2020
Switched from Unraid NVIDIA to your precompiled NVIDIA kernel 6.8.3 this morning, and nearly had a heart attack: booting the server took three times as long as before (LSIO Unraid NVIDIA). During the long boot I took some screenshots of error messages that scrolled by (see below). In a rush I pulled the Unraid USB stick, copied the 8 old Unraid files over, and put the stick back in. In the meantime Unraid came up, with an empty boot folder. That was when I realized just how long the boot process was taking. So I pulled the USB stick again, copied your new 8 files over, and restarted. This time I gave it a long while to boot before checking. The system seems to have come up now and is doing a parity check. Early in the morning, running down to the basement that many times, at my age. Phew. Please add a note about the long boot process; it may help people like me stay calm.
ich777 Posted October 21, 2020 Author Share Posted October 21, 2020 9 minutes ago, hawihoney said: Booting the server took three times as long as before (LSIO Unraid NVIDIA). Please don't blame it on this build, the boot process is not longer than on the LSIO. I got several servers running some with this builds and some stock and the boot times are not that different. Can it be that your USB boot flash drive is currently dying since this messages try to tell you that. Actually the message where the putty windows is in the way would be nice to read why it can't extract bzroot... EDIT: Never pull a USB drive of a running server!!! Quote Link to comment
hawihoney Posted October 21, 2020
2 minutes ago, ich777 said: the boot process is not longer than with the LSIO one
I ran CHKDSK; the stick is fine. This morning I booted twice with your precompiled files and once with my old environment (Unraid NVIDIA), with the same USB stick. There was a huge difference here. If you say it can't be, then it must be something on my side. I can live with that.
ich777 Posted October 21, 2020 (Author)
5 minutes ago, hawihoney said: I ran CHKDSK; the stick is fine.
Depending on when you pulled the USB flash drive, that alone could cause these messages. But it would be really interesting to see what the SQUASHFS error on the first screenshot is, and why it can't extract the files.
6 minutes ago, hawihoney said: If you say it can't be, then it must be something on my side. I can live with that.
What machine are you running Unraid on? Is there also a difference between the LSIO images and my images? I boot my main machine within 2 minutes from a cold boot (it's a server board, so the BIOS checks usually take a little longer; the last time I checked, the boot process from the Unraid boot screen to finished took about 1 minute, and keep in mind that my image includes iSCSI, Nvidia drivers, MFT Tools, and DigitalDevices drivers for my TV tuners). Are you running my build now? Is there still an error, or is it running flawlessly? Did you safely eject the USB flash drive after you put the files on it, or how did you copy them over?
hawihoney Posted October 21, 2020
2 minutes ago, ich777 said: What machine are you running Unraid on?
I'm running your build now. As a parity sync is running right now, I don't want to stress the server. NVENC and NVDEC seem to work; I tested with my smartphone and a forced reduced bitrate. No errors/warnings in the syslog from the last boot.
ich777 Posted October 21, 2020 (Author)
1 minute ago, hawihoney said: No errors/warnings in the syslog from the last boot.
I can't imagine what happened there; I have never seen an error like this. Feel free to contact me again after the parity check and we can investigate further if you want. By the way, the container has more than 10K pulls, and the prebuilt images have about 500+ downloads.
hawihoney Posted October 21, 2020
It was the long boot that made me nervous. I could SSH into the starting server, but it killed the session after a minute or so; that was when I took the screenshot with the PuTTY error. After that I was no longer able to SSH in. I started IPMI (I hadn't needed it for a very long time) just to find out that there's no JRE on my new laptop. Argh, my fault. So I gave up, rushed downstairs, pulled the USB stick, and when I came back up, Unraid had started, but without the /boot folder. So I rushed down again, pulled the stick, copied the old Unraid files back onto it, and booted. That came up fast and without a problem. In the meantime we chatted here.
This morning I set all Dockers and VMs to not autostart; starting the fully loaded server takes even longer. Then I gave your files a second go. I pulled the USB stick from the server again, copied the new files onto it, and pushed it back into the server. This time I gave it much more time before doing anything. And voila, after what seemed a very long time, the server came up. I double-checked the GPU UUID in your Kernel Helper GUI and double-checked the device IDs for the 5 passed-through devices (2x HBAs, 1x GPU, and 2x USB license sticks). Everything was identical. So I manually started all mounts, VMs, and Dockers. Everything looks good now.
No idea what was hanging during the boots, but for me it's fine; I don't restart the server that often. With server-grade backplanes and HBAs you don't need to shut down the server for e.g. disk replacement in the JBODs; only disk replacement on the bare metal requires stopping the array. It runs mostly 24/7, 365 days a year.
ich777 Posted October 21, 2020 (Author)
5 minutes ago, hawihoney said: Unraid had started, but without the /boot folder.
Yep, the /boot folder is physically the USB flash drive.
7 minutes ago, hawihoney said: Everything looks good now. No idea what was hanging during the boots.
If you have the full log of this successful boot, please send it to me; this is really strange and I want to investigate...
7 minutes ago, hawihoney said: It runs mostly 24/7, 365 days a year.
Since I do a lot of compiling and almost always upgrade to the latest beta, I have to reboot mine a few more times. Sorry for the inconvenience, but I don't know where to start, since this is the first time I have seen such an issue. Anyway, as I said, it's not much different from the Linuxserver.io images (I think Linuxserver.io saves the driver version in a file which their plugin reads and displays, but I could be wrong on that; my approach is a little different).
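The file-swap procedure discussed in this exchange can be made much less nerve-wracking with a backup step. A sketch, written as a generic helper so the paths are parameters; on Unraid the mounted flash drive would be /boot, but everything here is illustrative:

```shell
# Sketch of the swap procedure described in this thread: back up the bz*
# files on the flash drive before copying a prebuilt image over them.
swap_build() {
    flash=$1 newfiles=$2 backup=$3
    mkdir -p "$backup" || return 1
    cp "$flash"/bz* "$backup"/ || return 1   # keep the old files restorable
    cp "$newfiles"/bz* "$flash"/ || return 1
    sync                                     # flush writes before rebooting
}
```

With the old files safely in the backup directory, reverting to the previous build is just copying them back, instead of hunting for the original release archive.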
188pilas Posted October 23, 2020
Are you able to install ZFS 0.8.4 with beta30? I ran the build twice, and it seems to exclude the ZFS install.
ich777 Posted October 23, 2020 (Author)
45 minutes ago, 188pilas said: Are you able to install ZFS 0.8.4 with beta30?
No, because 0.8.4 only supports kernels up to version 5.6. Since beta30 uses kernel 5.8.13 it's not possible; it doesn't skip it, it simply cannot be built and errors out.
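The gate here is just a version comparison: each OpenZFS release declares a newest supported kernel (5.6 for 0.8.4, per the post above, while 0.8.5 added 5.8 support). A minimal illustrative check using `sort -V`, which orders version strings numerically:

```shell
# Sketch: succeed when the target kernel is at or below the newest kernel
# a given ZFS release supports. Version handling is deliberately simplified.
kernel_supported() {
    kernel=$1   # target kernel, e.g. 5.8.13
    max=$2      # newest supported kernel, e.g. 5.6 for ZFS 0.8.4
    # If the kernel sorts first (or equals the maximum), it is in range.
    first=$(printf '%s\n%s\n' "$kernel" "$max" | sort -V | head -n 1)
    [ "$first" = "$kernel" ]
}

kernel_supported 5.8.13 5.6 || echo "ZFS 0.8.4 cannot build on kernel 5.8.13"
```

This is why the build errors out on beta30 rather than silently skipping ZFS: 5.8.13 sorts after 5.6, so the release is out of range.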
188pilas Posted October 23, 2020 Share Posted October 23, 2020 k cool that makes perfect sense. I installed 0.8.5 and my zfs pools are theere. I also had an issue with ZnapZend not working but it just needed perl-5.32.0-x86_64-1.txz from nerdpack. 1 Quote Link to comment
ich777 Posted October 28, 2020 (Author)
The Unraid-Kernel-Helper plugin now has a basic GUI for creating/deleting IQNs, FileIO/block volumes, LUNs, and ACLs.
Keek Uras Posted October 29, 2020
21 hours ago, ich777 said: The Unraid-Kernel-Helper plugin now has a basic GUI for creating/deleting IQNs, FileIO/block volumes, LUNs, and ACLs.
Thank you for this! Could you please tell me how to access it? It's not obvious to me.
ich777 Posted October 29, 2020 (Author)
37 minutes ago, Keek Uras said: Could you please tell me how to access it?
First you have to build images with the Unraid-Kernel-Helper Docker with iSCSI built in (or simply download them from the first post). If you download them from the first post, extract the images, replace the files in the root of your USB boot drive (make sure to back up the existing files in case something goes wrong), and reboot the server. After that the Unraid-Kernel-Helper plugin will display the iSCSI tabs; if it doesn't, feel free to contact me again. Please read the description in the iSCSI tabs very carefully!
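For reference, the objects the GUI manages (IQN, FileIO/block volume, LUN, ACL) correspond to the standard `targetcli` hierarchy from the Linux LIO target, so the same setup can be done from the command line on a build with iSCSI support. A hypothetical recipe; the IQNs, path, and size are made up for illustration and this needs root:

```shell
# Create a 10 GiB file-backed backstore (a FileIO volume).
targetcli /backstores/fileio create disk1 /mnt/user/iscsi/disk1.img 10G

# Create the target IQN; targetcli adds a default portal group (tpg1).
targetcli /iscsi create iqn.2020-10.local.tower:disk1

# Export the backstore as a LUN under that target.
targetcli /iscsi/iqn.2020-10.local.tower:disk1/tpg1/luns create /backstores/fileio/disk1

# Allow one initiator via an ACL (use the client's real initiator IQN).
targetcli /iscsi/iqn.2020-10.local.tower:disk1/tpg1/acls create iqn.2020-10.local.client:initiator

# Persist the configuration across reboots of the target service.
targetcli saveconfig
```

The GUI performs the equivalent of these steps, so this is mainly useful for understanding what the tabs create and delete.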
Keek Uras Posted October 30, 2020
7 hours ago, ich777 said: First you have to build images with the Unraid-Kernel-Helper Docker with iSCSI built in (or simply download them from the first post)...
Thank you so much; my fault for missing that information!
ich777 Posted October 30, 2020 (Author)
5 hours ago, Keek Uras said: Thank you so much; my fault for missing that information!
Does everything work now?