Leaderboard

Popular Content

Showing content with the highest reputation on 07/05/21 in all areas

  1. ------------------------------------------------------------------------------------------------------------------
     Part 1: unraid write amplification
     ------------------------------------------------------------------------------------------------------------------
     To start out with, this is just a journal of my own experiences dealing with these writes; you run any and all of these commands at your own risk! Always make a backup of any data/drives you will be messing with. I would recommend getting a fresh backup of your cache drive and docker/appdata before starting any of this, just in case. Honestly it is a good excuse to update your complete backup.
     Ok, so as many have started to realize, unraid has had a serious issue with massively inflated writes to the cache SSD for the last few years, to the point that it has killed a number of SSDs in a very short amount of time and used up a hefty amount of the life of many other drives. A lot of it was documented in the original thread, but instead of reading all that I am going to give you the results of all the testing that went on there. My writes when I started this journey, with far fewer dockers than I have now, were around ~200GB+/day IIRC (I forgot the exact numbers and lost my notes from that long ago, but it was a LOT).
     The first step to reducing writes is to update to unraid 6.9+ and then move all the data off your cache SSDs to the array temporarily. You will then erase the cache pool using the built-in erase option and reformat it when you restart the array. This fixes the core unraid side of the excessive writes: it fixes some partition and mounting issues with the filesystem. After that, move the data back to the cache from the array. This dropped my writes to around ~75-85GB/day using a single BTRFS-formatted drive with a BTRFS image. Formatted as XFS with a BTRFS image my writes dropped to ~25GB/day, but then you can't have redundancy and it has its own issues. As you can see, the excessive writes still persist after this, just to a lesser extent; the remaining writes will depend on which dockers you are using and are an issue with docker itself.
     ------------------------------------------------------------------------------------------------------------------
     Part 2: Docker logs causing write inflation
     ------------------------------------------------------------------------------------------------------------------
     All the docker commands below need to be entered into the Extra Parameters: section of the docker template in unraid (you will need to switch to the advanced view in the top right corner).
     To match up a long container ID with a container in the unraid GUI, simply use ctrl+F to search the docker page in unraid for the container ID you see in the activity logs. Generally the first 3 or 4 characters are enough to find the right container.
     There are a few basic places writes come from with docker, and each has its own fix.
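     Before diving in, a side note on matching container IDs (not from the original write-up): instead of ctrl+F in the GUI you can also match a truncated container ID from the terminal with the standard docker CLI. Something like this should work, where "3f4a" is just a hypothetical example of an ID prefix seen in the log:
     # list full container IDs next to their names, then grep for the prefix from the log
     docker ps --all --no-trunc --format '{{.ID}}  {{.Names}}' | grep 3f4a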
     ------------------------------------------------------------------------------------------------------------------
     The first step is to run the inotifywait command from mgutt. This command will watch the internal docker image for writes and log them to /mnt/user/system/recent_modified_files_XXXXXX.txt:
     inotifywait -e create,modify,attrib,moved_from,moved_to --timefmt %c --format '%T %_e %w %f' -mr /var/lib/docker > /mnt/user/system/recent_modified_files_$(date +"%Y%m%d_%H%M%S").txt
     An alternate and less effective method is to use this command to return the 100 most recently modified files in the docker image:
     find /mnt/user/system/docker -type f -print0 | xargs -0 stat --format '%Y :%y %n' | sort -nr | cut -d: -f2- | head -n100
     I chose to make a userscript with the first command and then use the "run in background" option so I don't have to keep the terminal open. To kill the inotifywait, run this:
     pkill -xc inotifywait
     ------------------------------------------------------------------------------------------------------------------
     For me the first and most common writes came from the internal logging mechanism in docker. Among other things, it logs all the messages that would show up in the terminal if the program were run directly and not in docker. These writes go to:
     /var/lib/docker/containers/{containerid}/{containerid}-json.log
     They are stopped by the following command while leaving the unraid GUI logs intact:
     --log-driver syslog --log-opt syslog-address=udp://127.0.0.1:541
     ------------------------------------------------------------------------------------------------------------------
     The next type of writes are to:
     /var/lib/docker/containers/{containerid}/container-cached.log
     These are the logs you see when you click the log option in the unraid GUI; they require a stronger version of the above command:
     --log-driver none
     This disables both of the above types of logs.
     ------------------------------------------------------------------------------------------------------------------
     Next up are the healthcheck logs, which show up as writes to these files:
     /var/lib/docker/containers/{containerID}/.tmp-hostconfig.json{randomnumbers}
     /var/lib/docker/containers/{containerID}/.tmp-config.v2.json{randomnumbers}
     These are solved by either extending the health checks or disabling them. I prefer extending them to ~1 hour:
     --health-interval=60m
     They can be disabled completely with:
     --no-healthcheck
     ------------------------------------------------------------------------------------------------------------------
     The next type of writes are internal logs from the program in the container to the container's /tmp directory:
     /var/lib/docker/containers/{containerid}/tmp/some type of log file
     or
     /var/lib/docker/containers/{containerid}/var/tmp/some type of log file
     or
     /var/lib/docker/subvolumes/{Randomstring}/tmp/some type of log file
     This last one is hard to figure out, as it can be difficult to connect the subvolume to a container; sometimes opening the log file in question can clue you in to which docker it is for. This is a more advanced rabbit hole that was not really necessary to chase in my case.
     This happens when a program thinks it is writing to a ramdrive, but by default docker does not map a ramdrive to the /tmp directory. You can do it yourself easily though with the following command (it can be adapted to other dirs and use cases as well).
     This command creates a ramdrive in /tmp with full read/write permissions and a max size of 256MB (much larger than needed in most cases, but it only uses RAM as needed so it should not hurt anything in most cases; you can make it smaller as well):
     --mount type=tmpfs,destination=/tmp,tmpfs-mode=1777,tmpfs-size=256000000
     And that's pretty much it for properly created containers. Applying these commands to the worst offending containers dropped my writes down to around ~40GB/day. I left a few containers' logs intact as I have needed them a few times.
     ------------------------------------------------------------------------------------------------------------------
     Part 3: Dealing with appdata writes
     ------------------------------------------------------------------------------------------------------------------
     After this things get a bit more complicated. Each container will behave differently and you will kinda have to wing it. I saw random writes to various files in containers; sometimes you can change the logging folder in the program to the /tmp folder and add a ramdisk to the container, in others you can map another ramdrive to some other log folder, and still others can use workarounds unique to that specific program. It takes some know-how and digging to fix writes internally in the dockers.
     The alternate and universal option (and the required option in many cases) is to simply copy the appdata folder to a ramdisk on unraid and sync it back to the SSD hourly. This works with any docker and vastly reduces writes from logs / constant database access.
     Like above, first you need to log the appdata folder to see where the writes come from. This command will watch the appdata folder for writes and log them to /mnt/user/system/appdata_recent_modified_files_XXXXXX.txt:
     inotifywait -e create,modify,attrib,moved_from,moved_to --timefmt %c --format '%T %_e %w %f' -mr /mnt/user/appdata/*[!ramdisk] > /mnt/user/system/appdata_recent_modified_files_$(date +"%Y%m%d_%H%M%S").txt
     From here it will take some detective work to find the misbehaving containers and see what and where they are writing. In my case all the *arr's (sonarr etc.) were causing a lot of writes and there was nothing that could be done internally to fix it.
     After figuring out which appdata needs to move to the ramdisk, the next step is to create the ramdisk itself and then copy the appdata into it from the SSD. First create a folder in /mnt/cache/appdata/. It is very important to create the folder on the drive itself and NOT in /user.
     mkdir /mnt/cache/appdata/appramdisk
     chmod 777 /mnt/cache/appdata/appramdisk
     After this I use a very basic user script that is set to "run at array start". Adjust the max size of the disk to suit your use case; it only uses RAM as needed, so there is not a lot of harm in making it too big as long as it leaves enough room for everything else to run. You will need to customize the rsync commands with the folders you want to copy, naturally.
     #!/bin/bash
     #description=This script runs at the start of the array creating the appdata ramdisk and rsyncing the data into it
     echo ---------------------------------Create ramdisk for appdata----------------------------------
     mount -vt tmpfs -o size=8G appramdisk /mnt/cache/appdata/appramdisk
     echo ---------------------------------rsync to ramdisk in appdata----------------------------------
     rsync -ah --stats --delete /mnt/user/appdata/binhex-qbittorrentvpn /mnt/user/appdata/appramdisk
     rsync -ah --stats --delete /mnt/user/appdata/binhex-nzbhydra2 /mnt/user/appdata/appramdisk
     rsync -ah --stats --delete /mnt/user/appdata/*arr /mnt/user/appdata/appramdisk
     I then have a separate script set to run hourly that rsyncs everything in the ramdisk back to the SSD; it only copies the data that has changed, to save writes:
     #!/bin/bash
     #description=This script syncs the ramdisk appdata back to the ssd
     rsync -ahv --progress --delete /mnt/user/appdata/appramdisk/* /mnt/user/appdata/
     You will also need to apply a delay to the first docker container that is set to autostart in the unraid GUI (enable advanced view, right side of the container). Preferably put a container that is not being run out of the ramdisk first and put the delay on it, as the delay takes effect after the selected container has started. The delay needs to be long enough for the ramdisk rsync to complete.
     UPDATE THE DOCKER APPDATA FOLDER TO USE THE NEW "appramdisk" COPY OF THE APPDATA or it will just keep writing to the cache (e.g. /mnt/user/appdata/binhex-qbittorrentvpn becomes /mnt/user/appdata/appramdisk/binhex-qbittorrentvpn).
     Now for a clean shutdown, I created a "stop" file on the USB drive at /boot/config. It is called first thing when you click shutdown/reboot in the GUI, and the rest of the shutdown will wait until it has finished.
     touch /boot/config/stop
     In the stop file I decided to simply redirect to a user script called "Run at Shutdown" to make it easier to manage.
     #!/bin/bash
     #Runs the user script "Run at Shutdown" during shutdown or reboot.
     #It is called before anything else during the shutdown process.
     # Invoke 'Run at Shutdown' script if present
     if [ -f /boot/config/plugins/user.scripts/scripts/Run\ at\ Shutdown/script ]; then
       echo "Preparing Run at Shutdown script"
       cp /boot/config/plugins/user.scripts/scripts/Run\ at\ Shutdown/script /var/tmp/shutdown
       chmod +x /var/tmp/shutdown
       logger Starting Run at Shutdown script
       /var/tmp/shutdown
     fi
     The Run at Shutdown script itself first stops all running docker containers so they can close out open files. It then rsyncs the appramdisk back to the SSD before clearing the ramdisk and unmounting it.
     #!/bin/bash
     #description=This script runs first thing at shutdown or reboot and handles rsyncing appramdisk and unmounting it.
     logger Stopping Dockers
     docker stop $(docker ps -q)
     logger Dockers stopped
     logger Started appramdisk rsync
     rsync -ah --stats --delete /mnt/user/appdata/appramdisk/* /mnt/user/appdata/ | logger
     logger rsync finished
     logger clearing appramdisk data
     rm -r /mnt/user/appdata/appramdisk/* | logger
     logger unmounting appramdisk
     umount -v appramdisk | logger
     And that's it. It seems to be working well: no hang-ups when rebooting, and everything happens automatically. Risks are minimal for these containers, as worst case I lose an hour's worth of data from sonarr, big deal. I would not use this on a container whose data you can't afford to lose an hour's worth of.
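     If you want to check the effect on your own system, here is a rough sketch of a measurement script (not part of the original setup): it reads the sectors-written counter for the cache device from /proc/diskstats, where "sdb" is only a placeholder for your cache SSD.
     #!/bin/bash
     # Rough sketch: measure data written to the cache SSD over 24 hours.
     # "sdb" is a placeholder - replace it with your cache device.
     # Field 10 of /proc/diskstats is sectors written (512-byte sectors).
     DEV=sdb
     START=$(awk -v d="$DEV" '$3 == d {print $10}' /proc/diskstats)
     sleep 86400
     END=$(awk -v d="$DEV" '$3 == d {print $10}' /proc/diskstats)
     echo "$DEV: $(( (END - START) * 512 / 1024 / 1024 )) MiB written in the last 24h"
     Running it (in the background) before and after each change gives you daily numbers comparable to the ones in the recap below.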
     The writes are finally low enough that I would be OK putting appdata and docker back onto my main SSDs with redundancy instead of the single piece-of-junk drive I am using now (which has gone from ~98% life to 69% in the last year doing nothing but handling docker on unraid). I am really impressed with how well this is working.
     So to recap:
     unraid 6.8 > BTRFS image > BTRFS formatted cache = ~200GB++/day
     unraid 6.9 > BTRFS image > separate unprotected XFS SSD, everything stock = ~25GB/day
     unraid 6.9 > BTRFS image > BTRFS SSD, everything stock = 75-85GB/day
     unraid 6.9 > Docker folder > BTRFS SSD, everything stock = ~60GB/day
     unraid 6.9 > BTRFS image > BTRFS SSD > disabled the low-hanging-fruit docker json logs = ~48GB/day
     unraid 6.9 > BTRFS image > BTRFS SSD > disabled all misbehaving docker json logs for running containers except those I want to see + added ramdrives to /tmp in containers that do internal logging = ~30GB/day
     unraid 6.9 > BTRFS image > BTRFS SSD > disabled all misbehaving docker json logs for running containers except those I want to see + added ramdrives to /tmp in containers that do internal logging + moved appdata for the *arr's and qbittorrent to a ramdisk with hourly rsyncs to the SSD appdata = ~10-12GB/day
     Since most of the writes are now large writes from the rsync, there is very little write amplification, which vastly improves the total writes for the day even though possibly more raw data is being written to the drive. I don't use plex, but it and database dockers are known for being FAR worse in writes than what I run; people were regularly seeing hundreds of GB in writes a day from these alone. They could be vastly improved with the above commands.
    1 point
  2. Unraid is a great tool, making very complex system administration functions seem like playtime. This is wonderful, and every release keeps adding great new features. One thing that I think is sorely missing is an official backup & restore tool and guide. I know there are many forum posts and guides and plugins, and even a flash backup tool built in. None of this makes it easy or clear how to properly back up ALL the configuration and data in a simple and safe way. Obviously the shares are critical, and backup of this data should be handled by standard backup tools. What I am talking about is a way to export/import/migrate:
     Configuration Settings
     Installed Plugins and their configuration
     Docker settings
     VM settings
     Users and Share settings per user
     I think it would be useful and provide peace of mind if there was an official tool (or at least an official, updated guide) to back up and reproduce an unraid system upon failure or a desire to migrate hardware. Unraid is a digital hoarder's paradise. But how do we make a copy of paradise? Clearly there is a need for this, something that covers everything, OFFICIALLY. Here is just a sampling of people trying to figure out how to do this with a hodgepodge of solutions:
     https://forums.unraid.net/topic/72090-backing-up-the-unraid-server/
     https://forums.unraid.net/topic/86303-vm-backup-plugin/
     https://forums.unraid.net/topic/61211-plugin-ca-appdata-backup-restore-v2/
     https://forums.unraid.net/topic/81233-unraid-671-easy-way-to-backup-and-restore-specific-dockers
     https://forums.unraid.net/topic/102669-unraid-backup-strategy
     https://forums.unraid.net/topic/81740-a-reliable-backup-solution-with-gui-ca-backup-restore-or-duplicati/
     https://forums.unraid.net/topic/79539-best-way-to-backup-unraid-config
     https://forums.unraid.net/topic/100202-latest-super-easy-method-for-automated-flash-zip-backup
     https://forums.unraid.net/topic/68970-kvm-backups-snapshots
     https://www.reddit.com/r/unRAID/comments/dcuxqc/community_backup_methods/
     There are probably 100+ threads like this all over the internet.
    1 point
  3. Security Violations
     To expand a little further, this means that any attempt to inject code into the GUI, or to run any additional bash commands after the docker run command is issued via the template, is a security violation. It doesn't matter whether the command is benign or not. Any attempt results in automatic blacklisting of the app. If the app happens to require the additional commands, include instructions in the Overview for the user to edit the template accordingly. There are no exceptions, ever. It should be noted that within Config elements, any description that can be construed as an HTML tag is technically a violation. Do not use a description of "<some_var>"; use "some_var" instead.
    1 point
  4. Seems about right for 8TB, maybe a little better than I get.
    1 point
  5. 1 point
  6. @eds In that case, take a look at your DNS settings in CF and turn off the "proxy" feature (orange cloud = on, grey = off); then it's just like a DNS service ... but like I said, RTMP is a different protocol and needs some setup (to use the rtmp module) to proxy an RTMP stream, which then doesn't go through the HTTP part and doesn't really use the domain selection. You will figure it out. https://obsproject.com/forum/resources/how-to-set-up-your-own-private-rtmp-server-using-nginx.50/ There you see the server block (the streaming server part on its own port, not HTTP by domain ...); take a look if you really want to go through with it.
    1 point
  7. Hello @i-B4se ... I'm puzzled by the CPU cooler/fan orientation. In the "before" build the air was exhausted out the side. Is your rack perforated on that side, or do you leave the side panel open under high load? In the "after" build you kept the CPU orientation and are blowing the warm CPU air toward the power supply, even though the Noctuas at the back exhaust directly. Can that not be done differently, or am I overlooking a direct benefit? Or can the cooler not be mounted rotated 90 degrees on the motherboard?
    1 point
  8. You're probably right about that; it works with less RAM allocated. Thanks for the help
    1 point
  9. Yup, but this is without the SMB multi-channel feature that Jon is talking about.
    1 point
  10. It should not, but please test. Host access is a hack to circumvent the network protection of docker itself. Normally host access is not required and should be disabled.
    1 point
  11. Thanks Simon, sorry I didn't get time to make the last adjustments. I changed the USB setting to 3.0 (qemu XHCI) but the VM then seemed to stall every couple of minutes for a few minutes, with the VM showing 10 out of the 12 threads maxed out. However, there was also a Windows update applied at the same time. So I turned it back to 2.0 and everything worked as it did before, so I assumed it wasn't the update causing it to stall. I just tried the 3.0 nec XHCI version out of curiosity and forgot to assign the USB devices in the settings. However, I added them via the USB manager and they attached and seem to be working fine. So I figure the problem was that I had too many USB devices attached, and changing to 3.0 solved it. For anyone who might be having a similar problem, thanks for your help.
    1 point
  12. Yes, but only with disk shares, at least for me. If I move from \\Server\UserShare\Ordner to \\Server\UserShare\Ordner2, it goes through the LAN. If, on the other hand, I move the same folder via \\Server\DiskShare\Ordner to \\Server\DiskShare\Ordner2, it runs entirely on the server.
    1 point
  13. I was thinking that might be what was needed now. I just have to say thanks though, you have been very helpful and have gone above and beyond in helping and for that I am grateful. I already have my appdata backed up which is the important bit and there is maybe one or two small things for convenience I will copy off and I'll just recreate the cache from scratch. Again, you have been great. 👍
    1 point
  14. It seems like you have created plots_dir yourself and not used the one in the template. Try looking for it again. Remember to look under "show more settings".
    1 point
  15. In this April Fools' video, starting at 06:15, spaceinvaderone passed fake disks through via vdisks: I think you should be able to manage it with that method. Otherwise, pass a small SATA SSD through to each Unraid VM?!
    1 point
  16. Run xfs_repair as I typed above, without -n, or nothing will be done. The disk looks fine; replace the SATA cable, then copy all the data to the new disk in the array, replacing existing files. This will fix any corrupt files. Alternatively, you could run a binary file-compare utility to detect the corrupt files, but it will take about the same time.
    1 point
  17. I have not yet updated to the newest test version, so I don't know what changed there. I am running the test version from 4 days ago.
    1 point
  18. Actually I’m using ProperTree.
    1 point
  19. You need to specify the partition at the end:
      xfs_repair -v /dev/sdc1
      To change the UUID you can use:
      xfs_admin -U generate /dev/sdc1
      These errors usually mean a bad SATA cable; please post a SMART report for that disk. That's good, but don't forget that because of the read errors during the rebuild there can be more corrupt files, unless those sectors were unused.
    1 point
  20. Check your lost+found share for files put there by repair
    1 point
  21. Do you have any evidence that the drive itself had failed? More likely is that you disturbed its connections when replacing the other disk. Do you still have that original disk? Can you mount it with Unassigned Devices?
    1 point
  22. That is because of the filesystem corruption on disk5; see below for how to fix it. Naturally, that won't fix the corrupt files. https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui Run it without -n or nothing will be done; if it asks for -L, use it.
    1 point
  23. I could come up with a few things:
      Try a new PSU.
      In the Tweak settings, try the performance setting instead of the default "on demand".
      Flash the BIOS on the motherboard.
      Try a new VM with fresh installs and see if the same crash occurs.
    1 point
  24. Enjoyed the podcast, but IMHO more important than using other things to get around the performance penalty introduced by user shares would be to try and improve that. For example, I still need to use disk shares for internal transfers if I want good performance, and any SMB improvements won't help with that. E.g., a transfer of the same folder contents (16 large files) from one pool to another done with pv, first using disk shares, then using user shares:
      46.4GiB 0:00:42 [1.09GiB/s] [==============================================>] 100%
      46.4GiB 0:02:45 [ 286MiB/s] [==============================================>] 100%
      If the base user share performance could be improved, it would also benefit SMB and any other transfers.
    1 point
  25. There are read errors on the failing device, and because of that some data can't be moved to the other one. There's still a lot of data remaining on that device; you'll need to back up whatever you can to the array or another device, then recreate the cache.
    1 point
  26. That is expected: the 1st rebuilt disk would be corrupt because of the read errors (it doesn't matter if it was empty), and this would then translate to corruption on the next rebuild. You can run ddrescue on the failing disk; this way you can at least know which files are corrupt after the clone.
    1 point
  27. Sorry, my fault, I forgot about the metadata; it's still raid1. First convert it to single also:
      btrfs balance start -f -mconvert=single /mnt/cache
      Then do the above.
    1 point
  28. Excellent, that did the trick. Thanks for your help
    1 point
  29. I have to look into this; it can take a while since I don't own a UPS. No it isn't, you can change it by editing the file /boot/config/plugins/prometheus_pihole_exporter/settings.cfg on your USB boot device and changing the port there. EDIT: I will update the configuration page of the plugin so that the port is displayed/changeable from there in one of the next updates. Are you running the two instances with keepalived? Theoretically it's possible, but you have to put a line in your go file so that you start a second instance of the exporter on another port. Does it say on the plugin page that it's running, and in the Prometheus WebGUI that it's up? Do you run your containers in a custom network on br0? If yes, you need to enable Allow Host Access in the Docker settings.
    1 point
  30. Version 1.10.1 has been released and container updated.
    1 point
  31. I would suggest this since there's a suspect device, so the quicker it's done the better. Note that if there are read errors there will be problems, but it would be the same if you tried to convert to raid1.
    1 point
  32. Since the pool is now in single mode and has a possibly failing device, you can try to remove it now instead of converting back to raid1 and then removing it. However, removing a device from a single-profile pool can only be done manually. Before starting, it's a good idea to make sure backups are up to date, then:
      -with the array started, type in the console: btrfs dev del /dev/sdb1 /mnt/cache
      -if the command aborts with errors, post new diags; if the command completes without errors and you get the cursor back, stop the array
      -unassign both cache devices
      -start array
      -stop array
      -assign the Samsung cache device only
      -start array
      -done
    1 point
  33. No, there is some problem with the pool, probably because of the failing ADATA device, but I can't see what it was in the diags posted.
    1 point
  34. It's not mounting because you converted the pool to single profile and then removed a device; that's not possible, you can only remove devices from a redundant pool. This might work:
      -stop array
      -unassign all cache devices
      -start array
      -type on the console (if you rebooted since the diags, make sure the ADATA SSD is still sdb): btrfs-select-super -s 1 /dev/sdb1
      -stop array
      -assign both cache devices; there can't be an "all data on this device will be deleted" warning for any of the cache devices
      -start array
      -post new diags.
    1 point
  35. Please post current diagnostics: Tools -> Diagnostics
    1 point
  36. Finally figured this one out, in case it helps anyone. The issue I was having was that the disk would not format due to the protection, so I needed to disable it. There is a PSID printed on the disk. I used this to reset the disk with the command below, where <PSIDNODASHS> is the PSID on the disk and <device> is the device:
      sedutil-cli --PSIDrevertAdminSP <PSIDNODASHS> /dev/<device>
      I was then able to run the format command and the Type 2 protection was removed. Hopefully this will help someone. Thanks everyone for helping.
    1 point
  37. I liked the look of these so I borrowed the idea, made it a bit darker and added the texture from the back of the new Apple Pro XDR display. I decided I had spent enough time on it today, but I'll probably keep tweaking it to make it a bit more polished.
    1 point
  38. https://wiki.unraid.net/Manual/Storage_Management#Removing_disks_from_a_multi-device_pool Also
    1 point
  39. I guess I should update this. It's been working really solidly for weeks now. I did one test where I enabled "Access to Custom Networks" and within hours it had locked up with the same output as above. So for me, the solution was to disable access to custom networks in the Docker settings.
    1 point
  40. I changed to another repo; please give it a try in a few hours, when the templates update in CA.
    1 point
  41. Um, wow. So with the latest changes with the appdata ramdisk, my writes for the last day were a mere 8GB?!? Now, the rsync cron didn't work for the first few hours for some reason, so I changed it to hourly for the last ~13 hours; the real writes with rsync every 2 hours would most likely be a bit more, but honestly I might just stick with every hour if it is only ~16GB of writes. So to recap:
      BTRFS image > XFS SSD, everything stock = ~25GB/day
      BTRFS image > BTRFS SSD, everything stock = 75-85GB/day
      Docker folder > BTRFS SSD, everything stock = ~60GB/day
      BTRFS image > BTRFS SSD > disabled the low-hanging-fruit docker json logs = ~48GB/day
      BTRFS image > BTRFS SSD > disabled all misbehaving docker json logs for running containers except those I want to see + added ramdrives to /tmp in containers that do internal logging = ~30GB/day
      BTRFS image > BTRFS SSD > disabled all misbehaving docker json logs for running containers except those I want to see + added ramdrives to /tmp in containers that do internal logging + moved appdata for the *arr's and qbittorrent to a ramdisk with ~bi-hourly rsyncs to the SSD appdata = ~8-12GB/day
      Since most of the writes are large writes from the rsync, there is very little write amplification, which vastly improves the total writes for the day even though possibly more raw data is being written to the drive.
    1 point
  42. It really depends on how you use the VM. Most users, including me, use one or more VMs just as if they were using a traditional PC: in this case we want performance in our VMs, so we start to pass through hardware: CPU, GPU, SATA controllers, NVMe drives, USB controllers, ethernet cards, etc. Why do we do this? In my case I'm using a macOS VM with most hardware passed through. I decided to go with a VM because it's faster to set up the environment and you have fewer headaches; moreover I have a completely separated environment, so the bootloader cannot mess with Windows 10 installed on another drive, which I boot bare metal. Others prefer performance VMs because they can have "more computers" in the same PC, for example different VMs for different operating systems, or different VMs for different fields (school, work, media, firewall, gaming, etc.). Virtual machines can boot UEFI with OVMF, so malware will act the same if it finds a vulnerability in the firmware: but in this case the firmware is a file (OVMF_CODE and its OVMF_VARS), so if it gets infected all you need to do is delete the files and replace them instead of flashing the BIOS chip. But if malware infects the OS in the cases I described above, it's nearly the same as having malware on a bare metal installation. Another case is if you use VMs in a different way. Consider for example online services for antivirus scans: all the malware runs on virtual machines which are created and deleted as soon as the scan finishes. The base OS can be in a vdisk, and all you have to do to start fresh is delete and replace the vdisk (a few seconds?). Or if you need only a few apps in your VM, installed in a vdisk: again, back up a copy of the base vdisk and of the firmware, and if you get infected just start fresh in a few minutes. What Microsoft is choosing, i.e. making secure boot and TPM mandatory (in addition to a series of other things), doesn't sit well with me (but this is a personal opinion; I am the owner of my PC and I want to do whatever I want, without limits).
    1 point
  43. Yes, it's log level INFO and can therefore be ignored; a quick Google reveals it's something to do with compression of blocks, and no doubt it helps with performance in some manner. It's currently one world, one docker, so you need to create another docker container (changing the port and /config location) if you want multiple worlds running at the same time. See above.
    1 point
  44. I'm really tempted to change the title of this thread to something like Cipher Message of the Month or just Soon™️ Especially when the forum automatically inserts the "4 weeks later..." 🤣
    1 point
  45. Small report from my side. Yesterday I played around a bit with the leaked Win11 build on Unraid VMs.
      Install on SeaBIOS and OVMF both worked fine.
      A fresh install of Win11 Home without network isn't possible; you need network access on a fresh Home install. Pro works without internet and the use of a local profile is possible.
      Upgrade from Win10 Home to Win11 Home worked without internet access.
      Upgrade from Win10 Pro, SeaBIOS or OVMF, both worked without any problems.
      Upgrade from Win10 German to Win11 (only available in English) also worked.
      All tests were with the default Win10 template; the only changes I made were the BIOS versions, 8GB RAM, 60GB vdisk, 6 cores. Not sure what the future limitations like secure boot and TPM 2.0 will be; the current build is more like a new Win10 version.
    1 point
  46. Everything. You have switched the value and the key, and also removed the =. The default value is also not -e; just leave it blank.
    1 point
  47. If you go to settings and switch to v0.3.4, it works fine for me.
    1 point
  48. This was a change on the Ookla side, and not related to the 6.9.x series updates. My unraid software is a few versions older, and this issue started April 7th for me as well. I did install henrywhitaker3/speedtest-tracker after this plugin broke. I still prefer the plugin because it can highlight speed tests below thresholds for both down and up tests, which is why I spent some time this weekend looking at the code for the speedtest plugin. It seems someone else here noticed a new version of the script and asked if dmacias could update the plugin. After looking at the code more, I found the repository for the speedtest-cli script that is used within this plugin. That file, as of now, was last updated on April 8th to version 2.1.3 by the maintainer of that repo. I applied this version to my system and the speedtest plugin works again. Providing it here for others, if interested.
      ** Notice: Not claiming to be the creator of the plugin, or the creator of the speedtest.py script the plugin uses. Providing a workaround to get the plugin back up and operational.
      ** Disclaimer: Follow these steps at your own risk. This info is provided as-is. Understand and know what these steps are doing. If you do not know, then don't run them!
      Downloading and modifying this file in /tmp to not cause issues with real data; this was done via ssh to the unraid server:
      cd /tmp
      mkdir speedtest-new
      cd speedtest-new
      Now we grab the new version of the script (raw repository location):
      wget https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py
      The downloaded file, when viewed on the speedtest plugin settings tab, will show a lot more "version" info, like in this image. Updating the "def version():" function will change the settings page from this image to showing just 2.1.3. This is optional. Open speedtest.py, look for "def version():" and update this function to look like below: Save and exit your editor once this section looks like the above.
      Change the name of the file, to know what version it is. Currently still in our download location, in this example /tmp/speedtest-new:
      mv speedtest.py speedtest-2.1.3.py
      Now that we have the file, updated the version function (if you did that) and renamed it, we can place it in the speedtest plugin location. Use this command to copy it to the speedtest plugin location:
      cp speedtest-2.1.3.py /usr/local/emhttp/plugins/speedtest/scripts
      Change into the speedtest scripts location and back up the current script. Then rename the new version file to the speedtest.py file name and lastly make our new script executable:
      cd /usr/local/emhttp/plugins/speedtest/scripts
      ls -ltr
      Verify you see "speedtest-2.1.3.py". If not, go back to the location you downloaded to and copy it to this location again. Now back up the old 2.0.0 version file; I'm assuming that is the latest version you had working:
      cp speedtest.py speedtest-2.0.0.py
      Copy the new file over and make it executable:
      cp speedtest-2.1.3.py speedtest.py
      chmod 755 speedtest.py
      Verify rwxr-xr-x for speedtest.py:
      ls -ltr
      Change to the webui settings page for the speedtest plugin. Refresh the page and the version drop down should now show 2.1.3 (or, like the image above, more detailed version info). Hit the apply button. Now hit the Begin Test button to verify the speedtest plugin is working again for you.
      The last thing to do is to make this survive a reboot. For me that was making a custom directory in the /boot location (this is your flash device, take care when executing commands here; you can wipe your unraid flash device or make unintended changes if you do not know what you are doing). Then copy the speedtest-2.1.3.py file from /tmp to there. Either in the "go" file or from the "user scripts" add-on you can copy the file (and rename it in one step) to the speedtest scripts location above. Be sure to include a step to chmod the file in your go or user scripts. Please note the copy and chmod need to be done after plugins have been installed, otherwise the change will be overwritten, or the copy could fail since the directory wouldn't exist. Alternatively, you could copy the plugin .tgz file to another computer, unzip it (7zip works), find the file and replace it, then re-zip it (as a TAR archive compressed with GNU Zip) and replace the file in the plugins folder on the flash (/boot).
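      If you go the go-file route, the lines might look something like this; just a sketch, where /boot/custom/speedtest-2.1.3.py stands in for wherever you stored the new script on the flash:
      # added to /boot/config/go (sketch only, adjust paths to your setup)
      # per the caveat above, this needs to run after plugins have been installed
      cp /boot/custom/speedtest-2.1.3.py /usr/local/emhttp/plugins/speedtest/scripts/speedtest.py
      chmod 755 /usr/local/emhttp/plugins/speedtest/scripts/speedtest.py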
    1 point
  49. It's not anymore. You can now select some prerequisites from the NERDpack and then use pip3 to handle the install. I've added this to my go (/boot/config/go) file:
      # Since the NerdPack does not include this anymore we need to download docker-compose ourselves.
      # pip3, python3 and the setuptools are all provided by the nerdpack
      # See: https://forums.unraid.net/index.php?/topic/35866-unRAID-6-NerdPack---CLI-tools-(iftop,-iotop,-screen,-kbd,-etc.&do=findComment&comment=838361)
      pip3 install docker-compose
      I must say that your curl solution looks clean and doesn't require setting up additional dependencies via the NERDpack in advance. Bookmarked in case the pip3 solution fails later on.
      EDIT: The curl solution seems to have been truncated @juan11perez. I wanted to use this in my Duplicati container but noticed that the second line is missing its end. The docker docs give the full command:
      COMPOSE_VERSION=$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep 'tag_name' | cut -d\" -f4)
      curl -L https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
      chmod +x /usr/local/bin/docker-compose
      You may want to edit your example in case anyone wants to use it. Thanks!
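      Whichever route you use, a quick sanity check that the binary ended up on the PATH is simply:
      docker-compose --version
      which should print the version that was just installed.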
    1 point