Leaderboard

Popular Content

Showing content with the highest reputation on 08/11/21 in all areas

  1. I'd like to highlight some other improvements in 6.10 that may not be so obvious to spot from the release notes; some of them are internal and not directly visible.
     - Event-driven model to obtain server information and update the GUI in real time. The advantage of this model is its scalability: multiple browsers can be opened to the GUI simultaneously without much impact. In addition, stale browser sessions won't create any CSRF errors anymore, and people who keep their browser open 24/7 will find the GUI stays responsive at all times.
     - Docker labels. Docker labels are added to allow people using Docker compose to make use of icons and GUI access. Look at a Docker 'run' command output to see exactly what labels are used (a sketch follows below).
     - Docker custom networks. A new setting for custom networks is available. Originally, custom networks are created using macvlan mode, and this mode is kept when upgrading to version 6.10. The new ipvlan mode is introduced to battle the crashes some people experience when using macvlan mode. If that is your case, change to ipvlan mode and test. Changing the mode does not require reconfiguring anything at the Docker level; internally everything is taken care of.
     - Docker bridge network (docker0). docker0 now supports IPv6. This is implemented by assigning docker0 a private IPv6 subnet (fd17::/64), similar to what is done for IPv4, and using network translation to communicate with the outside world. Containers connected to the bridge network now have both IPv4 and IPv6 connectivity (of course, the system must have IPv6 configured in the network configuration). In addition, several enhancements were made to the IPv6 implementation to better deal with the use (or non-use) of IPv6.
     - Plugins page. The plugins page now loads information in two steps: first the list of plugins is created, then the more time-consuming plugin status field is retrieved in the background. The result is a faster-loading plugins page, especially when you have a lot of plugins installed.
     - Dashboard graphs. The dashboard now has two graphs available. The CPU graph is displayed by default, while the NETWORK graph is a new option under Interface (see the 'General Info' selection). The CPU graph may be hidden as well in case it is not desired. Both graphs have a configurable timeline, which is 30 seconds by default and can be changed independently for each graph to see a longer or shorter history. Graphs are updated in real time and are useful for observing the behavior of the server under different circumstances.
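     A hedged sketch of using those labels from a plain docker run (compose users would put the same keys under labels:). The label keys and values shown here are assumptions based on a typical Unraid-generated 'docker run' output; verify them against your own:
       docker run -d --name my-app \
         -l net.unraid.docker.webui='http://[IP]:[PORT:8080]' \
         -l net.unraid.docker.icon='https://example.com/icon.png' \
         my-app-image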
    4 points
  2. I really hope "multiple arrays" will win, as the array is a core feature of Unraid, and a plugin already allows ZFS on Unraid. In addition, ZFS more or less demands a lot of RAM, preferably ECC, whereas the Unraid philosophy is to run on whatever hardware you have.
    3 points
  3. This is a bug. I can reproduce the issue and am working on a fix. Made a fix; it should come in rc2.
    2 points
  4. Bumping this because I see it's not fixed in the (otherwise excellent and stable!) 6.10.0-rc1
    2 points
  5. Create the VM
     - Create a Windows Home / Pro ISO with the Media Creation Tool, or a Windows Enterprise ISO via UUP Dump.
     - Settings > VM Manager > select and download the latest "Default Windows VirtIO driver ISO"
     - Optional: if you want to pass a graphics card through (not an iGPU!): Tools > System Devices > tick all entries of the graphics card (VGA, Audio, USB, etc.), bind them to VFIO > reboot the Unraid server
     - Optional: if you don't want to leave load balancing to the CPU, isolate the VM's cores via Settings > CPU Pinning > CPU Isolation
     - VMS > Add VM > Windows 10
     - Either: select all cores and leave load balancing to the CPU, or: assign the isolated cores
     - 4096 MB RAM with identical min and max values, since different values can lead to problems (2 GB is the official minimum)
     - Latest Q35 as the machine type, because it is recommended for Intel GVT-g. Info: with Windows 10 I needed Q35-5.1 to get the network driver working; otherwise I got error code 56.
     - Select the Windows ISO file under "OS Install ISO"
     - 32G vdisk or larger (32G is now the official minimum; it used to be 20G). Note: vdisk.img files are sparse files and therefore occupy less space on the drive than displayed, but you have to do something to keep it that way (see the size check after this guide).
     - Set the VNC Graphics Card to German
     - Optional: add a graphics card via the plus symbol
     - Optional: select a Sound Card if you want to connect speakers / headphones locally to the server; for graphics cards, select the card's audio controller
     - Network Model: if you don't run any Docker containers on the "br0" network, choose "virtio" for better performance, since "virtio-net" is significantly slower. Info: as of Unraid 6.12.4 the bridge network should be disabled; in that case choose "vhost0" and "virtio" for a VM.
     - Optional: untick "Start VM after creation" and assign a vGPU to the VM via GVT-g
     - Create the VM
     - Optional: assign a vGPU via the GVT-g plugin and start the VM
     Installation
     - VMS > VM logo > VNC Remote
     - If you missed "Press any key", simply type "reset" in the UEFI shell to reboot
     - Enable "Server-side scaling" on the left-hand edge
     - Custom installation > Load driver > VirtIO CD drive > select amd64\w10\ to load the SCSI controller driver for the virtual disk
     - Windows 11: if you get stuck at the following screen, press SHIFT + F10, type "OOBE\BYPASSNRO" and confirm with ENTER. The VM restarts, and after selecting the keyboard layout you can continue with "I don't have internet".
     After the installation
     - Optional: if you like, enable hibernation so you can do more than just shut the VM down from the Unraid menu. To do so, click the Windows logo > type "cmd" > right-click and run as administrator:
       powercfg.exe /hibernate on
       powercfg /h /type full
     - Right-click the Windows logo > Run > powercfg.cpl, then "Choose what pressing..." > "Some settings are..."
       > disable fast startup and, if you opted for hibernation, enable it. Fast startup must be disabled, because otherwise you will run into problems if you later change, e.g., the number of CPU cores, etc.
     - From the VirtIO CD drive, run virtio-win-gt-x64.msi, which installs the following drivers: Balloon, Network, Pvpanic, Qemufwcfg, Qemupciserial, Vioinput, Viorng, Vioscsi, Vioserial, Viostor, Viofs. Only now does the VM have internet access.
     - From the VirtIO CD drive, run virtio-win-guest-tools, which also installs the VNC graphics driver so that we can now change the resolution as well. This allows us to conveniently shut the VM down (Stop) or hibernate it (Hibernate) from the Unraid menu.
     - Bottom right, right-click the network icon > "Network..." > Change adapter options > right-click Ethernet > Internet Protocol Version 4... > Properties > assign a static IP address
     - Right-click the Windows logo > System > Remote Desktop > enable Remote Desktop
     - Optional: download and install the driver for Intel (vGPU), Nvidia (graphics card) or AMD (graphics card)
     - Close the current window, search for "Remote..." on a Windows PC and open "Remote Desktop Connection" (RDP). Enter the IP address and Windows username. Also adjust the resolution under "Display" so the VM does not start in, e.g., 4K like your PC, which can cause very high CPU load on the server. Note: RDP runs much more smoothly than NoVNC in the browser and also supports sound. Parsec works as an alternative.
     - Optional: open PowerShell as administrator and run the following to rid Windows of bloatware: iwr -useb https://git.io/debloat|iex
     - Optional: go straight to the login screen: right-click the Windows logo > Run > regedit > HKEY_LOCAL_MACHINE > Software > Policies > Microsoft > right-click Windows > New > Key > enter Personalization as the name > right-click Personalization > New > DWORD > NoLockScreen > double-click > value 1 > OK
     - Install all updates (rebooting several times if necessary)
     - Shut down
     - Optional: remove the ISO file and the VirtIO CD drive from the VM configuration
     - Create a backup of our vanilla Windows vdisk1.img. This can be done via Krusader (Apps), SMB (if a network share exists) or via the Unraid web terminal (">_" at the top right) with the following command (adjust the paths if necessary):
       cp -a --reflink --sparse=auto "/mnt/user/domains/Windows 10/vdisk1.img" "/mnt/user/domains/Windows 10/vdisk1-backup.img"
     Video
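     Regarding the note above that vdisk.img files are sparse: a quick, hedged way to see the difference between the reported size and what is actually allocated on disk (the path is the one from the example above; adjust it to your own share):
       ls -lh "/mnt/user/domains/Windows 10/vdisk1.img"   # apparent file size, e.g. 32G
       du -h  "/mnt/user/domains/Windows 10/vdisk1.img"   # blocks actually allocated on the drive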
    1 point
  6. I was going to create a new vdisk for the new VM. I don't tend to re-use vdisks.
    1 point
  7. After installing the Intel driver I had the strange issue that my mouse pointer / cursor was shown at a different position than where it really was, but only through NoVNC; through RDP it works. The offset shrinks when moving to the left and grows when moving to the right, which means I'm able to click directly on the Windows icon in the bottom-left corner without any offset. It wasn't easy to open YouTube that way ^^ But finally I found out that the vGPU does not accelerate video playback through NoVNC. Again, it does if I use RDP. EDIT: OK, solved this as follows (I translated the following from German): - right-click on desktop > Display - click Identify, to be sure which display is used for VNC, in my case the first one - Multiple Displays > Show only on display 1 (it was set to "extend display"). Any idea how to enable the vGPU to accelerate the browser and desktop while using NoVNC?
    1 point
  8. @PeteAsking, are you planning on static tagging to unifi-controller:version-6.2.26 or are you going to roll the dice on latest?
    1 point
  9. ipvlan seems like an upgrade from macvlan anyway, so I didn't even bother testing my config on macvlan with 6.10.0-rc1. I went right to ipvlan as soon as the update was complete and am now at 45 hours with no issues. I generally couldn't make it half that long on 6.9.x with my configuration before it locked up.
    1 point
  10. I think you can upgrade straight from 5.14 without an issue. I've seen people on the forums who have done it without any problems at all, even though I haven't done it myself. With Docker it's very easy to take a copy of the controller's files and put them back if anything goes wrong, so I would recommend just stopping the container, copying the files, then upgrading. It does take longer to start up after the upgrade, as it updates some files and the database, so you may have to give it five minutes or so to complete the first time you move to the newer version. The cosmetic changes are still there, which some people don't like, but the main point is that if you have WiFi 6 devices (or intend to buy them) you can't escape upgrading. When all is said and done, 5.14 was a really amazing and stable version with a nice interface, so the best one can hope for is to equal it in a future version. It's going to be a while before we see that same level again, so just temper your expectations a little if you upgrade (i.e. the new interface isn't as nice and whatnot, but it's certainly usable in a production environment from what I can tell).
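     Copying the controller's files before upgrading can look like this; a minimal sketch, assuming the container is named unifi-controller and its data lives in the usual Unraid appdata path (both names are assumptions, adjust them to your setup):
       docker stop unifi-controller
       cp -a /mnt/user/appdata/unifi-controller /mnt/user/appdata/unifi-controller-backup
       # update the container to the newer image, start it, and allow a few minutes
       # for the database migration on first boot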
    1 point
  11. Hey, I know it's not in the poll, but... if you could make the "x" button on notifications bigger, or just the clickable space around it, so that when we click near it the notification closes instead of loading the page it came from, that would be awesome! It's really enraging, even more so when it's a Docker notification. Thanks
    1 point
  12. I'm convinced. Just created my second pool instead of using the unassigned drives.
    1 point
  13. Most of the time it is easier to restore from backups (assuming you have them). If you DO want to sort out a lost+found folder, then it is worth mentioning that the Linux ‘file’ command can be used to determine the file type (and thus typically the file extension) for files that have lost their names.
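     A minimal sketch of that, assuming the lost+found folder ended up on disk1 (the path is an assumption; use the disk that was repaired):
       cd /mnt/disk1/lost+found
       file * | less   # prints the detected type for each recovered, nameless file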
    1 point
  14. I am now the proud owner of a Supermicro SC836BA-R1K28B 3U server chassis, 16 x 3.5", PWS-1K28P-SQ, Passthrough A; the seller was really very nice. The only things still missing are unimportant bits like the CPU, mainboard and RAM.
    1 point
  15. This happens when 'bzmodules', which is a compressed file system image, cannot be mounted from the USB flash device. Probably the file is corrupted. I suggest restoring your flash backup via the USB Creator tool, which will also ensure the flash file system is not corrupted, boot and try the update again.
    1 point
  16. There's nothing obvious in the system log except Nerdpack is installing down-rev packages. Maybe remove that plugin or try running in 'safe mode'.
    1 point
  17. No, it's not possible: OVMF boots UEFI, SeaBIOS boots legacy, that's all; you would need to convert your UEFI installation to legacy. I never tried going from UEFI to legacy, only legacy to UEFI; Microsoft has a utility for that and it's straightforward. Try searching the internet, I saw some links pointing to instructions, so this should be feasible. Make backups first!
    1 point
  18. Well, ZFS or not, ECC RAM is always nice to have in an Unraid server. It provides extra protection against data corruption, etc. For a normal PC it isn't really needed, but for a NAS or a server with important data it's a very nice thing to have. ECC adds a new and directly applicable layer of protection, something that does not exist in any capacity without it. ECC is always used on hardware RAID cards, and by the same measure it should be used when implementing software RAID or any other software storage technology that uses RAM as a cache or integrity-calculation buffer.
    1 point
  19. As the maintainers bytemark and noodlefighter are not active anymore, I created a new repository and included most of the open pull requests: https://github.com/mgutt/docker-apachewebdav https://hub.docker.com/r/apachewebdav/apachewebdav In addition I changed the look of the WebUI by enabling Icons and switching to an HTML table: With more effort it would be possible to realize something similar to this project: https://github.com/jmlemetayer/abba If anyone is interested in this, then let me know.
    1 point
  20. No - the Unraid system only freezes when I shut down the Mac VM. I did provide some logs on the previous page. Nothing in them, though.
    1 point
  21. Hi, sorry, I'm sometimes a bit slow on the uptake and can be a little annoying now and then. That's down to my thyroid and the strain my mental health has been under lately. Please bear with me. I'll switch the file system of the SSDs to XFS in the future. I actually did leave a few GB free back when I created the vdisks, but that was apparently just too little. I already reformatted both SSDs yesterday so the file system is as good as new for Unraid. The videos you sent me are definitely quite good and are helping me, many thanks for that. I've now also found another one on the topic, this one here: and I'm just in the process of setting it up. Curious to see how it goes.
    1 point
  22. I never figured it out (never really bothered to), but it's something with the headers on the reverse proxy: either trust the Cloudflare IPs, or pass the real client IP to the server. Those should help you start looking around! Btw, remember to disable anything like Brotli, minify, or Rocket Loader on Cloudflare, or you're going to have UI problems like a missing login form, etc. Sent from my Mi 10 Pro using Tapatalk
    1 point
  23. @paulmorabi A lot of errors in the config.plist. When you update the bootloader you should always check the config.plist for added/removed entries, and before rebooting, always use ocvalidate to validate the config.plist. Here is your config.plist as verified by ocvalidate: Try the EFI I sent by PM and let me know. Everything is updated and the config.plist is fixed.
    1 point
  24. If you are, like me, annoyed by "Remember me" not working properly and don't want to authenticate every day on every device, just add a path mapping to the Redis container like this and it should be fixed: Container Path: /bitnami/ Host Path: /mnt/user/appdata/redis/bitnami Edit: this path mapping is for bitnami/redis only; other Redis containers could differ.
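     Outside the Unraid template, that mapping is just a bind mount; a minimal sketch, assuming the bitnami/redis image (the container name and environment variable are illustrative, keep whatever your template already uses):
       # persist Redis data so "Remember me" sessions survive container restarts
       docker run -d --name redis \
         -e ALLOW_EMPTY_PASSWORD=yes \
         -v /mnt/user/appdata/redis/bitnami:/bitnami/ \
         bitnami/redis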
    1 point
  25. updated to 6.10.0-rc1 and so far so good.
    1 point
  26. DoH! At least it's configurable, ostensibly you could set an exclusion for tower.local https://support.mozilla.org/en-US/kb/firefox-dns-over-https#w_excluding-specific-domains
    1 point
  27. TBH, file system conversions literally scare me to death. You are putting yourself in a position where if anything goes awry during the operation, you lose it all. I think the better approach for folks will be to empty their devices one by one by shifting data to other disks, then reformatting the emptied disks and moving the data back. Do that one by one for each disk in the array. Heck, might even be able to script that operation ;-).
    1 point
  28. With SSL:Yes it will generate and use a self-signed cert and respond on the URL https://<server-name>.<local-tld>/. If a certificate_bundle.pem file is present it will use that cert and also respond on https://<subject>/, where <subject> is the Common Name in the certificate. In other words it will respond on two different URLs using two different certs.
     If you set SSL:Auto then it will only respond on https://<subject>/, and these URLs all 302-redirect to https://<subject>/:
     http://<ip-address>/
     https://<ip-address>/
     http://<server-name>
     https://<server-name>/
     https://<server-name>.<local-tld>/
     Of course, if non-standard http and/or https ports are defined, those ports are included in those URLs. Note that with SSL:Auto, if your DNS server cannot resolve <subject> then you can be locked out of the webGUI.
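     Since everything hinges on <subject>, it can help to read the Common Name straight out of the bundle; a quick check, assuming the bundle sits in the usual config/ssl/certs folder on the flash drive (the path is an assumption):
       openssl x509 -in /boot/config/ssl/certs/certificate_bundle.pem -noout -subject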
    1 point
  29. In your travels through the interwebs, have you run across anyone who has upgraded straight from 5.14.23-ls76 to this current release? Do you have reason to believe that there may be specific intermediate steps that may need to happen for a smooth database upgrade? I know you personally went through a few intermediate versions, but I was hoping to avoid that if possible. Thanks for being our test pilot, I really appreciate your willingness to "take one for the team" if needed with this whole years long saga.
    1 point
  30. @ljm42 I misread the logs; it's not the flash backup creating the segfault. It may be worth checking if you're able.
    1 point
  31. I have had no issues and it's working as expected.
    1 point
  32. No. Any requirements for expanding a ZFS pool will still apply if we implement it, which means a lot more UI programming work for us to make sure the UI respects the rules of ZFS. That being said, there is an active project on GitHub to make this possible with ZFS: https://github.com/openzfs/zfs/pull/8853. It's still in the alpha stage, so no idea when it would make it up the stack to be a native part of the project.
    1 point
  33. Here is an example command: https://forums.unraid.net/topic/103655-was-macht-die-kiste-grade-hallo-spindown/?tab=comments#comment-957360 And there is quite a bit to be found here as well: https://forums.unraid.net/topic/110999-guide-on-how-to-stop-excessive-writes-destroying-your-cache-ssd/
    1 point
  34. Ready for these uber-complicated instructions? Just kidding! It's easy! First you'll need to stop the array, then navigate to the Settings > SMB Settings page. From here, modify the SMB Extras section and add the following:
     server multi channel support = yes
     aio read size = 1
     aio write size = 1
     Save the changes and then start the array. WARNING: THIS IS STILL CONSIDERED EXPERIMENTAL! We haven't done sufficient testing with this yet, so feel free to use it, but do so at your own risk. Something else worth mentioning: according to the Samba project, Samba 4.15-rc2 was released just a few days ago, and there was this interesting note in it about multi-channel: https://wiki.samba.org/index.php/Samba_4.15_Features_added/changed#.22server_multi_channel_support.22_no_longer_experimental
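     One hedged way to confirm Samba actually picked the setting up once the array is started (testparm ships with Samba; the grep pattern is only illustrative):
       testparm -s 2>/dev/null | grep -i "multi channel"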
    1 point
  35. I'm really only going to say this one more time: you do not have vdisks that are filling up. You have a vdisk with a fixed size, and it is exactly as big as you create it. So if it shouldn't occupy the whole SSD, don't create such a large vdisk. Huh? What is that supposed to achieve? If you only have one SSD, use XFS; it increases the SSD's lifespan. If you have two SSDs in RAID1, only BTRFS works. That could very well be due to the size of the vdisk file. If it's a cheap SSD, you have effectively filled it to 100% and thus have no cells left free for over-provisioning. That is exactly why enterprise SSDs are never 1TB, but 960GB or even just 800GB, so that enough OP always stays free. The only solution is therefore to shrink the image. If you know the image works, why don't you shrink it? I did post the video about that. Once the image has been shrunk, the VM should boot at normal speed again after another TRIM, because all those cells then become free again for OP.
    1 point
  36. There is a section in the readme that will help.
     0) If you don't have python3 on your Unraid, install it via the NerdTools plugin.
     1) Find the path for /etc/pihole/ - for me it is /mnt/user/appdata/pihole/pihole/
     2) SSH to your Unraid and cd to that directory
     3) git clone https://github.com/anudeepND/whitelist.git
     4) cd whitelist/scripts
     5) ./whitelist.py --dir /mnt/user/appdata/pihole/pihole/ --docker
     6) ./referral.sh --dir /mnt/user/appdata/pihole/pihole/ --docker
     7) Restart pihole
    1 point
  37. No worries. Any VPN with the amount of variables and options like this one is a bit challenging to set up. Feel free to post a screenshot of your settings screen and any errors you are seeing. I will gladly help out. 🙂
    1 point
  38. The NVENC support was what made djaydev's Handbrake docker image special... Losing that build is a huge loss... Perhaps someone else could fork his work from djaydev/handbrake on GitHub or Docker Hub and make it available. I have used Spaceinvaderone's guide to download new media to a watch folder, and then sonarr/radarr moves the newly transcoded media to the array. I found a 720p quality setting for TV shows and a 1080p setting for movies that I felt retained the quality and saved a ton of space, while still allowing most of our devices to direct play. Many remuxed 45-minute 1080p episodes are about 4 GB, but my 720p x264 version averages about 1.2 GB. I will be able to continue using djaydev's Handbrake since I already had it before it went away. I feel bad for anyone else who wants to use it and can't find it.
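     For anyone scripting their own encodes instead, a hedged sketch of an NVENC re-encode with HandBrakeCLI; the file names, encoder choice, quality value and width are illustrative, not the poster's settings:
       HandBrakeCLI -i episode-1080p-remux.mkv -o episode-720p.mkv -e nvenc_h264 -q 22 -w 1280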
    1 point
  39. Can I please ask Limetech for an FAQ or written details about existing users / licenses: behaviors, expectations, and changes in light of this upgrade? It's really not clear to me who does what; it's just assumptions at this point. I, for one, will opt out of associating any of my licenses with these forum credentials, and that's a showstopper for me. I value my privacy and my control more than any social or cloud services, and I will not sign my life away down the road to some vague license agreement. I'm not pointing at Limetech, but tech history has proved over and over again that we're going down a very slippery slope when we extend our own devices with some fashion of cloud services. I simply don't trust that model, full stop, period. This is not what I signed up for and I would like to keep it that way. I can appreciate some of the business benefits explained, and I respect that, but one also has to respect our privacy.
    1 point
  40. Again, I don't understand: then just don't use the feature, don't open the SSL port for your server and it won't be reachable from the URL from outside... and I'm logged in... and I've used the plugin once for the flash backup (because I personally like the idea). It's still only optional, as described, in my point of view.
    1 point
  41. I don't see why this is a privacy concern in any way... The process is super simple; I used it a few days ago to replace my key and it is way easier to replace your key that way. Btw, you had to register with a mail account anyway, and from my perspective the process was more complicated in the past. To me this is a super simple way to get your trial key, buy one, or even replace it. Also please note that there is nothing that communicates constantly with the forums, or rather with the so much hated "cloud". If you want to access your server and manage it from the forums so that the stats are displayed in the "My Servers" section, you have to install the My Servers plugin from the CA App; if you don't want this, then don't install it. Just my 2 cents.
    1 point
  42. I recommend that you don't uninstall @dee31797's Handbrake docker until we can get an answer from @Djoss whether it will be supported in the future. I'm with you though. Handbrake with NVENC support is my secondary docker application. If you are automating your encodes, you may want to check out Unmanic as that will support hevc-nvenc.
    1 point
  43. You don't need to uninstall it if it still works. However, you will never be able to reinstall it should the need arise. If there are other options available, I'd look at them.
    1 point
  44. Can someone please help me understand this? What if I don't want to associate my keys and my home installs with my Unraid forum account? I honestly value my privacy, and what is in my home has no business being associated with any cloud or external servers. This is a very slippery slope and I just don't like it. Is there an option to continue being offline just the way I am right now? I just don't want to be part of this new ecosystem Unraid is creating. I value my privacy and I trust no one, I am sorry..... And there's absolutely no reason why you can't support both ways of activating / running Unraid. I should not be forced to connect with you if I have a valid key. "UPC and My Servers Plugin The most visible new feature is located in the upper right of the webGUI header. We call this the User Profile Component, or UPC. The UPC allows a user to associate their server(s) and license key(s) with their Unraid Community forum account. Starting with this release, it will be necessary for a new user to either sign-in with existing forum credentials or sign-up, creating a new account via the UPC in order to download a Trial key. All key purchases and upgrades are also handled exclusively via the UPC"
    1 point
  45. The reason it isn't on the list for this poll might not be so obvious. As it stands today, there are really three ways to do snapshots on Unraid (maybe more ;-). One is using btrfs snapshots at the filesystem layer. Another is using simple reflink copies, which still relies upon btrfs. Another still is using the tools built into QEMU to do this. Each method has pros and cons. The QEMU method is universal, as it works on every filesystem we support because it isn't filesystem dependent; unfortunately it also performs incredibly slowly. Btrfs snapshots are really great, but you first have to define subvolumes to use them, and they rely on the underlying storage being formatted with btrfs. Reflink copies are really easy because they are essentially a smart copy command (just add --reflink to any cp command). They still require the source/destination to be on btrfs, but they're super fast, storage efficient, and don't even require you to have subvolumes defined. And with the potential for ZFS, we have yet another option as it too supports snapshots! There are other challenges with snapshots as well, so it's a tougher nut to crack than some other features. That doesn't mean it's not on the roadmap.
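     For the reflink approach mentioned above, a minimal sketch, assuming the vdisk lives on a btrfs-formatted pool (path and file names are illustrative):
       # instant, space-efficient copy of a VM disk image on btrfs
       cp --reflink=always "/mnt/cache/domains/Win10/vdisk1.img" "/mnt/cache/domains/Win10/vdisk1.snap.img"
     The copy completes almost instantly and shares blocks with the original until either file is modified.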
    1 point
  46. It would be very nice if Unraid supported snapshots for VMs. I would prefer this feature above all others.
    1 point
  47. Not a problem with the release, but just thought I would mention it for other users. NerdPack hasn't been updated to work with this.
    1 point
  48. What Simon said. Basically, as of 6.9.2, some SAS drives (mainly Seagates) and also some SATA drives have an issue with spindown. Essentially they appear to spin down but then immediately spin back up. Not sure yet re the source of this issue, might be kernel/driver related, probably not related to the plugin (as (a) it happens with SATA drives as well and (b) some SAS drives spin down and up perfectly under 6.9.2).
    1 point