Leaderboard

Popular Content

Showing content with the highest reputation on 08/02/21 in all areas

  1. Ok, I finally found the difference:
     Unraid adapter name: virtio / QEMU adapter name: virtio-net-pci / driver in the VM: virtio_net
     Unraid adapter name: virtio-net / QEMU adapter name: virtio-net-pci / driver in the VM: virtio_net
     Don't see a difference? Right, there is none. Both are the same virtual adapter, used with the same drivers. The difference lies somewhere else entirely: QEMU optionally allows the "virtio-net-pci" adapter to write directly through the host's kernel. And exactly this option, called "vhost=on", is only active for Unraid's "virtio", which you can see in the VM logs. And that is exactly why the performance is so much better. There are a couple of interesting articles about this:
     https://insujang.github.io/2021-03-15/virtio-and-vhost-architecture-part-2/
     https://www.redhat.com/en/blog/introduction-virtio-networking-and-vhost-net
     The developers apparently found that a few VMs can't cope with it. At least the help text speaks of "stability", which is why they preset "virtio-net", where vhost is disabled, as the default. Anyone who wants to is free to ignore that and use "virtio". If anyone finds a statement from Limetech about which OS was actually the reason for this decision, feel free to link it; I couldn't find anything. And since I find the names very confusing, I opened a bug report:
    2 points
  2. Both settings use the same "virtio-net-pci" device, as "virtio-net" is only an alias:
     qemu -device help
     ...
     name "virtio-net-pci", bus PCI, alias "virtio-net"
     The only difference is that the slower "virtio-net" setting removes the "vhost=on" flag (open the VM logs to see this setting):
     virtio-net:
     -netdev tap,fd=33,id=hostnet0 \
     -device virtio-net,netdev=hostnet0,id=net0,mac=52:54:00:99:b8:93,bus=pci.0,addr=0x3 \
     virtio:
     -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 \
     -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:99:b8:93,bus=pci.0,addr=0x3 \
     And it's absolutely logical that this causes bad performance for the "virtio-net" setting, as QEMU then creates an additional "virtio-net device" in userspace instead of sharing the memory with the host:
     https://www.usenix.org/sites/default/files/conference/protected-files/srecon20americas_slides_krosnov.pdf
     A good write-up can be found here:
     https://insujang.github.io/2021-03-15/virtio-and-vhost-architecture-part-2/
     And now we understand the help text as well. Not sure about the stability thing, but if the guest supports it, I would use "virtio", which enables vhost. As I think the names of the adapters are confusing, I opened a bug report:
    2 points
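     The log difference described in item 2 can be spotted mechanically. A minimal sketch, assuming a QEMU command line copied out of the VM log (the sample line below is the "virtio" one quoted in the post; substitute your own):

     ```shell
     # Sketch: check a QEMU command line (copied from the Unraid VM log) for
     # the vhost flag. The sample line is the one quoted in the post above.
     log_line='-netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34'

     case "$log_line" in
       *vhost=on*) verdict="vhost enabled: traffic goes through the host kernel" ;;
       *)          verdict="vhost disabled: QEMU emulates the NIC in userspace" ;;
     esac
     echo "$verdict"   # prints: vhost enabled: traffic goes through the host kernel
     ```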
  3. To kick off our new support forum for My Servers, I'll reiterate the change log from our recent releases. Be sure to go to the Plugins tab and check for updates! Recent versions make significant improvements connecting to mothership and staying online, so hopefully we'll see a lot fewer "unraid-api restart" comments here. We've also added more functionality to the My Servers dashboard: you can now track each server's uptime and see the number of VMs that are installed and running. And of course we knocked out several bugs. Unraid 6.10 compatibility is also included; be sure to update your My Servers plugin before installing the upcoming rc. It is coming Soon(TM)! Here are the full release notes:
     ## 2021.07.21
     ### This version resolves:
     - Issues connecting to mothership
     - Flash backup issues after uninstall
     - Docker stats missing in specific conditions
     - Unexpected disk spin up
     ### This version adds:
     - Unraid 6.10 compatibility
     - Streamlined communication between unraid-api and mothership
     - Ability to track server uptime on dashboard
     - Ability to see number of VMs installed and running
     - Origin checks to unraid-api
     ## 2021.07.27
     ### This version resolves:
     - Issues with Local Access URLs
     - Issues booting when Internet is down
     ### This version adds:
     - More Unraid 6.10 compatibility
    1 point
  4. At the moment the two default VM network adapters have the following names: This is really confusing, as "virtio" and "virtio-net" are using the same virtual network adapter, "virtio-net-pci" (which cost me hours to find out). The only difference is that "virtio-net" disables "vhost-net" access. My suggestion is to use the real name, but with different notes (and it's time for more adapters, of course):
    1 point
  5. Support for fotosho docker container. Fotosho is a free and open source photo gallery. Organize your photos into albums internally and view as a slideshow. Does not require a database. Does not move, copy or modify your photos. Github: https://github.com/advplyr/fotosho Docker Hub: https://hub.docker.com/repository/docker/advplyr/fotosho
    1 point
  6. Thanks to @mgutt and @Ford Prefect for the answers and tests. The bottom line for me is that I can implement my plan just as I had intended. 👍
    1 point
  7. Excellent, glad to hear it! I have had weird bugs since 6.9 and that was one of them. Implementing VLANs for my custom IP containers fixed everything. I already had the necessary hardware to support VLANs, so it wasn't a big deal for me to change. Glad it's working now though, enjoy!
    1 point
  8. No mate, nothing. I simply waited until I saw that the plugin had been updated, installed it, and everything works OK :)
    1 point
  9. Quick update: since I disabled host access and moved all the bridged Docker containers to a separate VLAN, I haven't had a crash. Stable now for 25 days. I still have a few of my containers on host or the default bridge, but everything is running without any problems at the moment. @JorgeB, thanks for your suggestions and thoughts!
    1 point
  10. Ok, the search index has been rebuilt. Please let me know if any issues persist.
    1 point
  11. I ended up using an Ubuntu docker image with Wine (to run clrmamepro). I had to map every disk to its own folder in the docker (/mnt/disk1/emu, /mnt/disk2/emu, ... instead of a single /mnt/user/emu). The downsides of this method I have seen so far are:
      - I'm getting warnings about some sets: "Set exists in various rompaths"; this could probably be fixed by manually moving files to the same disk
      - Created/moved files are owned by root with permissions 644; I'm using chmod/chown in the Unraid terminal to fix this for now
      As for the speed, scanning seems normal (for multiple rom paths), but I'm guessing the writing/fixing could be slower because of parity. I tried the same docker with /mnt/user/emu mapped instead; it seemed like it didn't freeze, but the speed was slower. I used the default options for clrmamepro except that I unchecked Scanner->Advanced->"Deeper check for fixable missing files"; it might work with that option checked, but it takes longer and I didn't need it.
    1 point
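      The chmod/chown cleanup mentioned in item 11 can be scripted. A sketch, demonstrated on a scratch directory; on a real Unraid share you would point it at /mnt/disk1/emu etc., run it as root, and additionally chown to nobody:users (Unraid's default share owner):

      ```shell
      # Recreate the situation: a container wrote a file as root with mode 644.
      emu_dir=$(mktemp -d)          # stand-in for /mnt/disk1/emu
      touch "$emu_dir/game.rom"
      chmod 644 "$emu_dir/game.rom"

      # Re-apply Unraid-style permissions: 666 for files, 777 for directories.
      # On the real share you would also run: chown -R nobody:users "$emu_dir"
      find "$emu_dir" -type f -exec chmod 666 {} +
      find "$emu_dir" -type d -exec chmod 777 {} +

      mode=$(stat -c '%a' "$emu_dir/game.rom")
      echo "$mode"   # prints: 666
      rm -rf "$emu_dir"
      ```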
  12. I found my script! It was the backup script for a Nextcloud sync that I do. There is a line where I "cd" to a directory and then do an "rm *". As that directory does not exist, rm was run in the default /root, I suppose, causing the HTTP 500 errors. When a reboot occurs, the files are restored, I guess, which is why every reboot worked.
    1 point
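      The failure mode in item 12 (an unguarded "cd" followed by "rm *") is worth guarding against in any backup script. A sketch with a hypothetical backup path; the point is that the rm never runs if the cd failed:

      ```shell
      # If "cd" fails in an unguarded "cd dir; rm *", the rm runs in whatever
      # directory the script started in (often /root), exactly as described above.
      safe_clean() {
          # $1 is the backup directory; the real script's path will differ.
          cd "$1" || { echo "backup dir missing: $1" >&2; return 1; }
          rm -f ./*
      }

      safe_clean /nonexistent/backup || echo "aborted, nothing deleted"
      ```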
  13. Got you: the capital I on Initialise. All working now. Thank you very much for your help, support and patience!
    1 point
  14. Both methods work, but with host it's more stable in an ipv6 network. I tried to use the custom network solution with ipv6, but it fails if my router gets a new ipv6 prefix. Sadly it's not possible to create custom networks with "dynamic" ipv6 prefixes or automatically update the fixed ipv6 of a container. I will update my post and show the "host" method.
    1 point
  15. No, you can still run the trim plugin, it won't hurt.
    1 point
  16. Machinaris is just a new GUI over some popular systems for Chia. What really does the plotting is plotman. My guess is it puts new plots in the lowest folder in the list. You might want to read up on archiving here:
      https://github.com/guydavis/machinaris/wiki/Plotman#archiving
      https://github.com/ericaltendorf/plotman/wiki/Archiving
    1 point
  17. I get that all the time and just ignore it. I haven’t had an issue.
    1 point
  18. See if this applies to you: https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/ See also here: https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/
    1 point
  19. Yes, you will probably have to remove this part from the go-file once you update.
    1 point
  20. @kennygunit I was able to SSH in to my server and edit the file directly, but this will only last until you reboot:
      sudo nano /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php
      The permanent way is to modify the boot via SSH:
      sudo nano /boot/config/go
      Paste this at the end (Shift+Insert or right click):
      # Fix Docker - Case Insensitive
      sed -i 's#@Docker-Content-Digest:\\s*\(.*\)@#\@Docker-Content-Digest:\\s*\(.*\)@i#g' /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php
      Ctrl+X to save, then reboot. All thanks goes to HyperV, Morthan, and ich777 ❤️
    1 point
  21. Pardon my rudeness, but that sed command was replacing the entire line with just that text. I adjusted it a bit and use this instead, so it only replaces the found text (I almost always use different delimiters, because slashes get in the way of seeing what is being done when backslashes are involved):
      sed -i 's#@Docker-Content-Digest:\\s*\(.*\)@#\@Docker-Content-Digest:\\s*\(.*\)@i#g' /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php
      I used https://sed.js.org/ to check the syntax.
    1 point
  22. I had exactly the same issue and could not find any solutions on the forum or the internet, so I did some digging myself and found the cause. The docker update check script gets the remote digest of the latest tag from the docker repository via a header called 'Docker-Content-Digest'. The script checks for this header with a case-sensitive regex pattern. Manually querying the docker hub registry gives me a header called 'docker-content-digest' (mind the casing). The docker hub registry must have recently changed the casing of this header, because it broke for me in the last 24 hours. I'm still running Unraid 6.8.3, so I'm not 100% sure if this issue also exists in 6.9.x. If you feel up to it, you could quite easily fix this yourself until there is a real fix. I'll describe the steps below:
      1. Open the file /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php
      2. Go to line 457. There you should look for the text @Docker-Content-Digest:\s*(.*)@
      3. Replace it with @Docker-Content-Digest:\s*(.*)@i
      4. Save the file.
      This will make the header check case-insensitive and should make it work again.
    1 point
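      The one-line fix from item 22 can be tried safely against a stand-in file first. A sketch; the real target is /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php, and the sample line only mimics the preg_match call described above (the exact PHP code around line 457 may differ):

      ```shell
      # Stand-in for DockerClient.php containing the case-sensitive pattern.
      f=$(mktemp)
      printf '%s\n' 'preg_match("@Docker-Content-Digest:\s*(.*)@", $header, $match);' > "$f"

      # Append the "i" (case-insensitive) modifier to that PHP regex.
      # "&" in the replacement re-inserts the whole match, so only "i" is added.
      sed -i 's#@Docker-Content-Digest:\\s\*(\.\*)@#&i#' "$f"

      count=$(grep -c '@i"' "$f")
      echo "$count"   # prints: 1
      rm -f "$f"
      ```

      Using "&" sidesteps having to re-type (and re-escape) the whole pattern in the replacement, which is where the earlier attempts went wrong.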
  23. For anybody else looking up similar questions some 3+ years later: the very helpful archived notifications menu can be found at WebUI > Tools > Archived Notifications. You can then click on a notification and it will give you further details, as per extrobe's "NOK" paste. (Thanks for the pointers; this resolved my issue, which also ended up being a hot disk.)
    1 point
  24. I did a write-up on this too, because I found little to no info. I am happy that you posted this video; it would have helped me out a lot! Not to hijack your thread, but here is my info as well. I used fiber optic connections because I found the NICs to be cheaper; copper works just as well.
      Equipment (all retired enterprise equipment from eBay):
      Cache drives - 2x Kingston 240GB SSDs in RAID 0
      Unraid NIC - HP Dual Port 10Gb Ethernet PCIe Card for ProLiant 468349-001 ($38.00)
      Client NICs - HP 10GB Mellanox ConnectX-2 PCIe 10GbE Ethernet NIC 671798-001 ($12.00)
      Transceivers - 4x HPE BladeSystem 455883-B21 compatible 10GBASE-SR SFP+ 850nm 300m DOM transceiver ($20.00)
      Cables - LC UPC to LC UPC duplex 2.0mm PVC (OFNR) OM3 multimode fiber optic patch cable ($6.00)
      FYI - for the next person looking to do this, here is the process. If you are not able to completely saturate the 10GbE connection, it is likely because the hard drive at one of the ends is too slow; this is why my cache is 2x SSDs in RAID 0. Another cause could be that the share you are using is not set to use the SSD cache.
      Windows 10:
      1. Install the 10Gb NIC and give it a static IP address in a different subnet (if your other NIC's IP is 192.168.0.6, change the value of the third octet to any unused number 1-254, such as 192.168.66.6).
      2. Turn on network discovery (without this your computer will not respond to pings or any other kind of remote request). Simply open a Windows Explorer window and click the "Network" option; it will attempt to find computers and devices on your network, and if network discovery has not been enabled you will be prompted to do so.
      3. You're done for now, but you will need to force Windows to use the 10Gb NIC for connections to Unraid (keep reading).
      Unraid:
      1. Install your 10Gb NIC and make sure you can see it in the system info. Make note of the "eth#" shown.
      2. Go to Settings > Network Settings and scroll down to the "eth#" from the previous step. If it is in a bond group, the array must be stopped and the "eth#" removed.
      3. Assign the "eth#" a static IP address. This address must be in the same subnet/netmask as your Windows PC (in other words, 192.168.66.* - see the Windows sample above).
      4. Hit apply.
      5. Use PuTTY or something similar to connect to your Unraid server, then ping the Windows PC's 10Gb NIC using its address (our sample is 192.168.66.6). Because you have multiple NICs, you must specify which NIC to use for the ping:
         ping -I eth# 192.168.66.6
         (The -I switch lets you ping using a specific NIC; replace the # with your NIC id.) If the Windows 10 computer responds, all is well; go back to Windows to complete the configuration. If it does not respond, you missed one of the above steps or have some other issue.
      Back to Win 10 again (rhymes, right?): there are two ways to force Windows to use the 10Gb connection. For the purposes of documentation (keeping all info in one place) I have both listed, but in no way would I ever recommend using method 2.
      Method 1: add the Unraid server to your "hosts" file.
      Go to C:\Windows\System32\drivers\etc and open the "hosts" file with Notepad. Scroll all the way to the bottom and on a blank line enter the Unraid server's 10Gb NIC address followed by a space and then an alias (I like to use the hostname):
      192.168.66.6 Hostname
      Now save the file and attempt to ping "Hostname". You should find that "Hostname" now resolves to the IP address you specified in the hosts file. If you ever need to access the Unraid server from this PC but do NOT want to use the 10Gb NIC, you can simply enter another line in your hosts file:
      192.168.66.6 Hostname-10g
      192.168.0.6 Hostname
      Line 1 will resolve the alias "Hostname-10g" to the server's 10Gb NIC address; line 2 will resolve the alias "Hostname" to the server's 1Gb NIC address.
      Method 2 (NOT RECOMMENDED):
      Windows 10 now has more than one NIC that can connect to the Unraid server. How will it know which one to use? You would normally go in and change the NIC priority, but this is no good in Windows 10: change the settings all you want and they will just revert. You are expected to manually change the "metric" (although it's not well documented). Go to Network Connections in the Control Panel and select the properties of the 10Gb connection. Now select the IPv4 option in the "Networking" tab, then select the "Advanced" button in the new window. You should now see the "Advanced TCP/IP settings" window. At the bottom of this window, uncheck the "Automatic metric" box and set a value. Keep in mind that lower numbers mean a higher priority: I manually set a value of 10 for the 10Gb NIC and 20 for my wireless NIC, so the priority of one is higher than the other. This method uses one connection as a failover for the other: it will always try the 10Gb connection first and use the other if 10Gb fails. The problem is that it uses 10Gb for everything, and does so at the expense of giving your other NIC a lower priority.
      Sending BIG files to/from Unraid? Configure jumbo packets (OPTIONAL):
      It's basically what it sounds like: you can configure the connection to send larger packets of data at a time. This is more of a feature for those using the 10GbE connection exclusively to transfer large files. For this to work, jumbo packets must be configured the same on both ends (Unraid and Windows). If you are using a 10GbE switch, the port(s) used for this connection must be configured for jumbo packets as well.
      Windows 10: go back to your Windows computer; in the 10Gb NIC's driver properties you should find an option to configure jumbo packets. Simply set the MTU to 9000 and apply your changes.
      Unraid: go to Settings > Network Settings, scroll down to your 10Gb NIC's "eth#", change the MTU to 9000 and hit apply. Now go to Settings > Global Share Settings > Tunable (enable Direct IO) and set it to Yes.
    1 point
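      One sanity check worth adding to the jumbo-packet section of item 24: confirm that 9000-byte frames actually survive the path end to end. A sketch on the Linux/Unraid side; the address is the example one from the post, and the ping command is only printed here rather than run:

      ```shell
      # A 9000-byte MTU leaves 9000 - 20 (IP header) - 8 (ICMP header) = 8972
      # bytes of ping payload. "-M do" forbids fragmentation, so the ping fails
      # loudly if any hop (NIC, switch port, or the far end) is still at 1500.
      payload=$((9000 - 20 - 8))
      echo "ping -M do -s ${payload} 192.168.66.6"   # prints: ping -M do -s 8972 192.168.66.6
      ```

      If this ping reports "message too long", one of the three MTU settings (Windows NIC, switch port, Unraid eth#) was missed.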