Everything posted by Skitals

  1. Edit /boot/config/plugins/dynamix.cfg and change the line theme="dark" to theme="white". Or simply delete dynamix.cfg and it will use default settings; the file gets (re)created when you save any changes on the Display Settings page. Before deleting it, I am curious what the contents of your dynamix.cfg are. Does the file exist, and is the line theme="dark" present? To remove the plugin manually, you should just have to delete /boot/config/plugins/dark.theme.plg and the folder /boot/config/plugins/dark.theme/. Most likely it is an issue with dynamix.cfg, though. Lastly, could you be more specific about what you mean by the webgui not being accessible? What type of error/message are you getting?
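     From a terminal, something like this should flip it back (same path as above):
     sed -i 's/^theme="dark"/theme="white"/' /boot/config/plugins/dynamix.cfg
     # or just remove the file and let the Display Settings page recreate it:
     rm /boot/config/plugins/dynamix.cfg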
  2. I added options to change text and link color.
  3. I figured no one would ever uninstall it. Joking aside, yes, it is coded that way. Now that I have a persistent config file, it shouldn't be too much work to save your original setting and restore it on uninstall. I will add it to my to-do list.
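     Roughly, the install script would stash the active theme and the remove script would put it back; a sketch (the theme.orig filename is made up):
     # on install: remember the current theme line
     grep '^theme=' /boot/config/plugins/dynamix.cfg > /boot/config/plugins/dark.theme/theme.orig
     # on uninstall: restore it
     sed -i "s/^theme=.*/$(cat /boot/config/plugins/dark.theme/theme.orig)/" /boot/config/plugins/dynamix.cfg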
  4. I added the first customization option to the plugin page. You can adjust the grayscale/desaturation of images and icons UI-wide. I found some docker icons blinding on the dark theme, which is why I shipped with this turned on. Now you can set it to your liking (setting it to 0 turns it off completely). Settings are saved to /boot/config/plugins/dark.theme/settings.cfg and are applied to the css in real time and at boot, so it is persistent.
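     Under the hood it's just a value in that cfg which gets turned into a standard css filter; roughly (the key name here is illustrative):
     # /boot/config/plugins/dark.theme/settings.cfg
     grayscale="50"
     # which lands in the css as something like: img { filter: grayscale(50%); }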
  5. Plugin is linked. I need to update the first post!
  6. Thanks, I fixed that. I wasn't sure if it was best practice to delete them or not, since they don't survive reboot.
  7. Updated January 4, 2020: This is nearly a "build your own theme" plugin. It started as a custom version of the "black" theme that was easier on your eyes; now you can adjust a bunch of values from the Dark Theme settings page. Below is a screenshot of the default "Dark Theme" appearance, as well as the settings page showing which values you can currently modify. To install, search for "Dark Theme" in Community Applications!
  8. I would like to report the same as Nephilgrim on 6.8.0-rc5. I get between 1 and 4 "unexpected GSO type" log messages when I start my docker container on br0, and then nothing else. That is with one pihole docker on br0 w/ static ip and one win10 vm also on br0. I have an additional 9 docker containers on autostart in bridge network mode. Edit: Just fired up another (ubuntu) vm and ran speedtests on both VMs while pihole was running, and still no more unexpected GSO messages.
  9. I submitted a commit adding a default case to the theme switch statement.
  10. Thanks for the reply. I saw all the hardcoded css; most of it can be themed around using !important. The issue on the Flash page is pretty unique in that the hardcoded css is NOT getting included based on the filename of your theme. In this case the hardcoded css is very important, because if you override those elements in the css it breaks a whole bunch of other stuff. Fixing the flash page might be a very easy fix, and so far it's the only issue with custom themes I've found that can't be worked around.
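     For example, a rule in the theme css can out-rank most of those hardcoded styles (the selector here is just illustrative):
     div.title { background-color: #1c1c1c !important; }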
  11. If you use any custom theme, or simply rename a stock theme, ui elements get broken on the syslinux configuration page. This is easily reproducible with the following:
     cp /usr/local/emhttp/plugins/dynamix/styles/default-white.css /usr/local/emhttp/plugins/dynamix/styles/default-test.css
     cp /usr/local/emhttp/plugins/dynamix/styles/dynamix-white.css /usr/local/emhttp/plugins/dynamix/styles/dynamix-test.css
     Select the "Test" theme from Display Settings, then navigate to your-unraid-ip/Main/Flash?name=flash and it will look like the attached screenshot. There is a chunk of css hardcoded on that page that is not getting injected when the NAME of your theme is not default-white or default-black.
  12. Congrats, you actually uncovered a bug in unraid. Using a custom theme (filename) is the cause of what you described. This is easily verified by copying the stock default-white.css and dynamix-white.css to, for example, default-test.css and dynamix-test.css. Select the new "Test" theme and you will see the syslinux configuration page is all messed up. I spent hours wondering what I did wrong, applying changes to fix it, and chasing down everything else those changes broke as a result. Man, the gui is riddled with hardcoded hacks you can't easily theme around. The only solution is to hijack a standard theme. I renamed my file to default-black.css and it magically works (after reversing hours of work). So back up default-black.css, copy mine over, and select the "Black" theme.
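     In other words (the path to my css file is a placeholder):
     cd /usr/local/emhttp/plugins/dynamix/styles
     cp default-black.css default-black.css.bak
     cp /path/to/my/dark-theme.css default-black.css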
  13. This has been updated to a plugin so it survives reboots. Please see the new thread here:
  14. Also, to add: I had a LOT of issues getting my machine working, but now that it does, it works flawlessly. I started by installing windows 10 natively onto an nvme. When I first got passthrough working and installed the amd drivers, I wasn't able to get the vm to start again. I didn't have the exact problem you are describing, but I had a LOT of issues. One thing I did that I thought might have made a difference was booting natively into windows from that nvme (which was by then not working as a vm). To my surprise the gpu and drivers worked fine natively. I rebooted into unraid and the vm magically started working again. If you have a spare drive or ssd I would suggest installing windows 10 natively on it, and then passing through that drive instead of dealing with a vdisk. At the very least it helps with troubleshooting, because you can always boot directly into windows 10 to see whether the issue is with your windows installation or with virtualization quirks. Here is a video for reference on installing windows natively and booting it as a vm:
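     For reference, passing the whole drive instead of a vdisk just swaps the disk block in the vm xml for a block device; a minimal sketch (the by-id path is a placeholder, and bus/target may vary):
     <disk type="block" device="disk">
       <driver name="qemu" type="raw" cache="writeback"/>
       <source dev="/dev/disk/by-id/nvme-YOUR_DRIVE_HERE"/>
       <target dev="hdc" bus="sata"/>
     </disk>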
  15. So are you saying that if you were to create a new win10 vm right now with a new vdisk, win10 iso, ovmf, vnc, etc., it would go straight to the uefi shell? You can't get to the installer? Or you could install it and it works until you try passing through your gpu? I'm afraid the amd driver install on your win10 vdisk messed up that vm, but it shouldn't affect anything if you create a brand new vm with a new vdisk and nvram. If you can, create a new vm with a new vdisk and install windows 10 and the virtio drivers. Get it working with vnc. Before doing anything else, back up that vdisk so you have a clean baseline! If you are really brave and want to risk fudging up your ubuntu vm, take your working ubuntu vm w/ passthrough and edit the xml. Replace the path in <source file="/mnt/user/domains/Ubuntu/vdisk1.img"/> with the path to your new/clean/working win10 vnc vdisk. Don't change anything else. Hit save and start.
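     i.e. the only line that changes is the source path (the win10 path here is just an example):
     <source file="/mnt/user/domains/Ubuntu/vdisk1.img"/>
     becomes
     <source file="/mnt/user/domains/Windows10/vdisk1.img"/>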
  16. I would try the above; it will only take a minute to create a new vm and plug in those options. It sounds the same as the bug I ran into. There is definitely something going wrong with the nvram file, and that's the only workaround I found: creating a new vm pointing to the same exact vdisk builds a new nvram file.
  17. If the above is the case, try this: I think there is something about the 5700XT that fudges up the nvram file, and the only solution is to create a new vm (unless there is a way to delete/rebuild the nvram file that I am unfamiliar with). It only takes a minute to create a new vm and point it to the vdisk you installed windows to. Try doing this: Create a new VM. Select the Windows 10 template. From the form view make only the following changes: Hit create and report back. The only way I got my 5700XT working was to go full send like this, creating a new vm pointing to a working windows install with the 5700XT passed through from the get-go.
  18. So to be clear, you were able to install/boot windows with vnc graphics. You tried passing through your gpu and you only got the uefi shell. You switched back to vnc graphics, and it still only goes to the uefi shell. Is that correct?
  19. Post the xml for both the working Ubuntu vm and the nonworking win10 vm.
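     You can copy it from the vm's xml view in the unraid gui, or from a terminal (the vm names are whatever yours are called):
     virsh dumpxml "Ubuntu"
     virsh dumpxml "Windows 10"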
  20. The "freezing" after 'Loading /bzroot(-gui)...ok' is normal with efifb:off. It's not recommended to use the GUI mode except for initial setup in case networking isn't working to use the web gui, so try sticking to the normal non-gui mode. If you have ubuntu up and running with vnc graphics, what happens when you passthrough the 5700XT to that vm? Are you trying to passthrough the 5700XT to the windows10 vm before you have windows installed and setup? That is not recommended. It's recommended to get your guest os installed using vnc graphics, install virtio drivers, etc before passing through your gpu. If the highest version of q35 is 3.1 you are running an old version of unraid. I would highly recommend running a verion of unraid with a 5.x kernel since you are using all very new tech (x570 chipset, pcie gen 4 gpu, etc). You also want to use a kernel with navi reset patch, instead of that script you are using to "reset" the card. I would highly highly recommend unraid 6.8.0-RC5 combined with the corresponding custom kernel found in the first thread here:
  21. Everything you have supports uefi and you should use it. Turn on allow UEFI in unraid, disable CSM in your bios, and stick to OVMF. If your VMs were created with seabios, they probably do not support uefi. The same goes if, say, windows was installed on an SSD while in legacy (CSM) mode. If you start a VM in OVMF (uefi) mode and there is no bootable efi partition on your vdisk/passed drive, you will end up with what you saw: the uefi shell. So turn on allow UEFI in unraid, disable CSM/legacy boot in your motherboard, and create a NEW vm and install your operating system with OVMF. Your life will be a lot easier if you use "video=efifb:off" as a kernel parameter in syslinux.cfg; this completely disables the framebuffer for unraid and vfio will have an easier time binding your gpu. With a single GPU I believe you will need to pass a vbios. Pass the AMD Navi 10 HDMI Audio with the gpu; the other one you tried is the onboard motherboard audio and it WILL hard lock unraid. Oh, and use Q35-4.0.1 or newer. v3.1 works, but you will only get gen3 pcie speeds by default; v4.0.1 and newer default to gen4. That should give you a place to start!
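     For reference, the relevant stanza in /boot/syslinux/syslinux.cfg ends up looking something like this (your other append options may differ):
     label unRAID OS
       menu default
       kernel /bzimage
       append video=efifb:off initrd=/bzroot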
  22. https://wiki.unraid.net/Building_a_custom_kernel As the warning notes, this is very outdated, but the basic steps are the same. If you can't adapt it (substituting the proper packages, kernel source, and headers, downloading the correct unraid package, applying the unraid .patch files, etc.) you are probably in over your head. https://gist.github.com/gfjardim/c18d782c3e9aa30837ff This script is slightly newer, and you can also analyze it to see the basic steps. If you need guidance beyond that, I would say building a custom kernel is ill-advised.
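     The skeleton of the process is roughly this (versions, URLs, and paths below are placeholders, not a recipe):
     wget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.19.xx.tar.xz
     tar -xf linux-4.19.xx.tar.xz && cd linux-4.19.xx
     for p in /path/to/unraid-src/*.patch; do patch -p1 < "$p"; done   # apply unraid's kernel patches
     cp /path/to/unraid-src/.config .                                  # start from unraid's kernel config
     make oldconfig && make -j"$(nproc)" bzImage modules
     # then install/repack bzimage and the modules for the flash drive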
  23. Can you explain your problem? Is it locking up at boot, or randomly during use? With my 5700 XT, I would randomly crash unraid during gaming or even just using chrome in the win 10 vm. I made a ton of changes, but I finally got it stable. I settled on 6.8.0-rc5 with the kernel from the first thread. I added a second GPU in slot 3, which I have set in my bios as the initial video device; with this setup I no longer have to pass a vbios to the vm. I also updated to the Adrenalin 2020 drivers. With those changes, instead of locking up the entire host I would "only" lose signal where I was previously crashing. I could hear game audio continue, but the only recourse was to force stop the vm. The final fix was to DISABLE Radeon Anti-Lag and Radeon Enhanced Sync in Adrenalin. With that final change I am 100% stable and have no problem restarting my vm. That's a long way of saying: check if you have Radeon Anti-Lag and Radeon Enhanced Sync on, and turn them OFF. They default to on, at least in Adrenalin 2020. The second gpu might not be necessary; it might have just changed the behavior from crashing all of unraid to only having to force stop the vm. Also, use q35-4.0.1 (or newer) if you want gen4 pcie speed without xml changes.
  24. No experience with aida64, but here are my results (aida64 v6.20.5300):
     Memory Read: 53295 MB/s
     Memory Write: 44206 MB/s
     Memory Copy: 51345 MB/s
     Memory Latency: 87.4 ns
     Edit: I had noticed high cpu usage while the guest reported 1%; changing this from no to yes helped a bit and brought latency "down" to 83.3 ns. Still seems a bit high, yes?
     Edit 2: Okay, I got memory latency "down" to 79.0 ns AND reduced my idle cpu usage to practically 0% by making these two changes: I switched the usb controller to 3.0 nec, and I turned hpet back OFF and added these hyperv flags:
     Here are the new results:
     Memory Read: 56165 MB/s
     Memory Write: 44619 MB/s
     Memory Copy: 52043 MB/s
     Memory Latency: 79.0 ns
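     For anyone looking for where those hyperv flags live: they go in the <features> block of the vm xml. A typical set of enlightenments looks like this (illustrative only, not necessarily the exact flags I ended up with):
     <hyperv>
       <relaxed state="on"/>
       <vapic state="on"/>
       <spinlocks state="on" retries="8191"/>
       <vpindex state="on"/>
       <synic state="on"/>
       <stimer state="on"/>
     </hyperv>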