Leaderboard

Popular Content

Showing content with the highest reputation on 01/31/22 in Posts

  1. Finally the new Macinabox is ready. Sorry for the delay, work has been a F******G B*****D lately, taking all my time. It also has a new template, so please make sure your template is updated too (it will still work with the old template). A few new things have been added. It now supports Monterey, Big Sur, Catalina, Mojave and High Sierra, and you will see more options in the new template. As well as being able to choose the vdisk size for the install, you can now also choose whether the VM is created with a raw or qcow2 (my favourite!) vdisk. The latest version of OpenCore (0.7.7) is in this release. I will try to update the container regularly as new versions come out. However, you will notice a new option in the template where you can choose to install with the stock OpenCore (in the container) or use a custom one. You can add this in the custom_opencore folder in the Macinabox appdata folder. You can download versions to put there from https://github.com/thenickdude/KVM-Opencore/releases. Choose the .gz version from there, place it in the above folder, and set the template to custom, and it will use that (useful if I am slow in updating !! 🤣). Note: if set to custom but Macinabox can't find a custom OpenCore to unzip in this folder, it will use the stock one. There is also another option to delete and replace the existing OpenCore image that your VM is using. Set this to yes and run the container, and it will remove the OpenCore image from the macOS version selected in the template and replace it with a fresh one, stock or custom. By default the NICs for Monterey and Big Sur are virtio, and the vDisk bus is virtio for these too. High Sierra, Mojave and Catalina use a SATA vDisk bus, and they use e1000-82545em for their NICs. The correct NIC type for the "flavour" of OS you choose will be added automatically.
However, if for any macOS version you want to override the NIC type, you can change the default NIC type in the template between virtio, virtio-net, e1000-82545em and vmxnet3. By default the NIC for all VMs is on <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> This makes the network adapter appear built in and should help with Apple services. Make sure to delete the Macinabox helper script before running the new Macinabox so the new script is put in place, as there are some changes in that script too. I should be making some other changes in the next few weeks, but that's all for now.
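For reference, the NIC stanza in the VM's XML ends up looking something like this. This is a minimal sketch: the bridge name and the model type shown are assumptions for a typical Monterey/Big Sur setup, and only the address line is the one quoted above.

```xml
<interface type='bridge'>
  <source bridge='br0'/>    <!-- bridge name is an assumption -->
  <model type='virtio'/>    <!-- e1000-82545em for High Sierra/Mojave/Catalina -->
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
```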
    6 points
  2. Yes werbewunder, it is now well known throughout the German Unraid forum that you have problems. One thread is enough. Thanks.
    3 points
  3. Let's revisit once we are past the current issue. At the moment you want the unraid-api stopped anyway.
    2 points
  4. I installed a "virtual monitor" and I was able to change the resolution. I am going to try another dummy plug
    2 points
  5. Um, yeah, thanks for pointing that out. The problem is that the Unraid VM manager, on an update made from a change in the GUI (not an XML edit), will automatically change the NICs to be on bus 0x01, hence the address becomes <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> I will take a look at this and perhaps add something to the macinabox helper script to fix it.
    2 points
  6. The power consumption of the AEG Protect NAS UPS bugged me so much that I went and ordered the small CyberPower C550EPFCLCD. I can confirm it really does have a very low power draw of only 4 to 5 watts. I can't say whether my AEG UPS is a lemon, but I can recommend the CyberPower UPS without reservation.
    2 points
  7. After my trusty old NAS started throwing errors (the drives had a run time of around 7.5 years, so they had done okay) I did some emergency backing up to a QNAP I acquired, but ran unRaid on it as a trial. It went so well that I bought Pro within days, and planned a new server to let me decommission that NAS and a ~12 year old box (my last custom build) that did little more than create noise and draw power, and almost as a secondary function ran Plex. Both were maxed out for storage space and lacked much transcoding ability. The QNAP had 20TB with all the drives I cobbled into it (most of which were probably approaching failure also), and emptying both the old NAS and server left around 300GB, so that wasn't going to cut it; it was also a noisy rack mount beast whose hum could be heard at night from the other end of the house, with no virtualisation support! The plan was for a new build to not only expand the storage I had and up the specs to handle more transcoding, but that also:
Was quiet enough to live in the office instead of the garage
Would run a Windows VM with GPU passthrough to be a new daily driver for some light gaming (I am otherwise typically a Mac laptop user)
Could run home automation bits, Zabbix, and some other dockers, which would also allow me to decommission a 3rd computer (Linux box) I had running as a bit of a management server
Would hopefully use less overall power than those 3 machines combined. This is yet to be tested, but when I started speccing options, my debate was either quiet enough to live in the office and replace all 3 machines, or low power oriented (and maybe noisy) in the garage but requiring another new daily driver; obviously quiet won out.
Build updated with additions up to 21-08-19
OS at time of building: unRAID 6.9.2 Pro
CPU: Intel Core i9-11900K 3.5GHz 8-Core [Intel Ark | PBTech | Benchmark]
CPU Cooler: Noctua NH-D15 82.5 CFM CPU Cooler [Noctua]
Motherboard: MSI MAG Z590 Torpedo ATX LGA1200 MB [MSI]
RAM: Corsair Vengeance RGB Pro 48GB (2 x 16GB, 2 x 8GB) DDR4-3200 CL16 [Corsair]
GPU: Gigabyte GeForce GTX 950 2 GB WINDFORCE 2X Video Card (from parts on hand, while the bank balance recovers) [Gigabyte]
Case: Fractal Design Define 7 Dark ATX Mid Tower Case [Fractal Design]
Power Supply: Corsair HX Platinum 850W 80+ Platinum Certified Fully Modular ATX PSU [Corsair]
Fans: 3 stock fans with the case plus 2 on the Noctua
Parity Drive: 1x Western Digital Red Pro 10TB 3.5" 7200RPM [WD]
Data Drives: 5x Western Digital Red Pro 10TB 3.5" 7200RPM [WD]
Cache Drives: 2x Crucial P2 1TB M.2-2280 NVMe SSD [Crucial]
Other:
Fan Controller: iCue Commander Pro [Corsair] (comments on my experience with this in a post below)
In-Case Lighting: iCUE LS100 Smart Lighting Strip Starter Kit [Corsair]
Primary Use: Plex and friends (Radarr, Sonarr, qBittorrent, SABnzbd, Overseerr, Gaps), general file store/backup, and a Windows gaming (lite) VM (the Windows VM is shut down while I'm preclearing some disks that its storage is being moved to, but otherwise it's been running well)
Likes: Very quiet, should have plenty of power to experiment with, and I like the look. Also, I was worried that the Z590 and i9-11900K might be too new to be well supported by unRaid, so I quite like that it actually works!!
Dislikes: Other than my cabling skills to work with the available space, not a thing! Actually, it's just that as I've added things, I've not shut down and cabled nicely; I've tried to keep downtime to a minimum at the cost of tidiness.
Future Plans:
[Done] Get hardware transcoding going
[Done] Trying to get a Win10 VM with GPU passthrough going (Result: It worked easily and well. I did have to set power settings to not turn off, otherwise I'd have to use another machine to power it on from Unraid again, which makes sense but I didn't consider first boot)
[Still todo] Also getting back to Home Automation
[Done] Need to see how the temps go and determine if more cooling is required (Result: More not "needed", but looking to add some for redundancy and future proofing)
[Still todo] Probably move my website back to internal hosting
[In Progress] Lighting has never really been my thing, but this case was cheaper with the glass, so perhaps I'll get some more RGB fans or something to brighten it up (Update: Lighting strips don't position particularly well around the edges of the case. Still playing with layouts that hide the lights but light the case. More to be done, possibly use more top-of-case RGB fans to assist)
[New/In Progress] Good authenticated/authorised external access portal to expose the likes of Overseerr to specific friends
[New/todo] Setup mail services so I can move all my accounts from GSuite/Office365 (I have a number of accounts), possibly based on Mailcow
[New/todo] Migrate my old Zabbix config and get it running in docker form
[New/todo] Start researching expansion cards. I'm filling space fast and don't have any more usable on-board SATA, so expansion will require a card, and that's an area I'm just not familiar enough in yet, so I expect more research than I'd really like 🤣
Power Consumption: (Still needs to be both measured better and updated for latest additions)
Boot (peak): 156W (very briefly; mostly about 123W)
Idle (avg sample): 97W with drives running, 82W with 2 drives sleeping (3rd was too active to get a decent read with it down). Both with 13 various docker containers running.
Active (avg sample): 130W to 250W, average around 170W, running a Windows 10 VM with GPU passthrough, playing StarCraft 2.
Light use (avg sample): Samples between 82W and 130W, mostly around 92W, watching a movie with Plex.
Measured with a rather crappy TP-Link smart plug so subject to my poor sampling.
    1 point
  8. Hi folks, after spending a fair bit of time hardening my SMB configuration, I figured I'd write a quick guide on what I consider the best settings for the security of an SMB server running on Unraid 6.9.2. First, before we get into SMB settings, you may also want to consider hardening the data while it is at rest by specifying an encrypted file-system type for your array (although this isn't a share-specific option). For SMB, first set the SMB settings available. I've settled on the following block as what I consider to be a hardened SMB configuration for a standalone server that is not domain joined or using Kerberos authentication:
server min protocol = SMB3_11
client ipc min protocol = SMB3_11
client signing = mandatory
server signing = mandatory
client ipc signing = mandatory
client NTLMv2 auth = yes
smb encrypt = required
restrict anonymous = 2
null passwords = No
raw NTLMv2 auth = no
This configuration block is to be entered into the SMB extras configuration section of the SMB settings page. These settings will break compatibility with legacy clients, but when I say legacy I'm talking Windows Server 2003/XP. Windows 10+ clients should work without issue, as they all support (but are not necessarily configured to REQUIRE) these security features. These settings force the following security options:
All communications must occur via SMB v3.1.1
All communications force the use of signing
NTLMv2 authentication is required; LanMan authentication is implicitly disabled
All communications must be encrypted
Anonymous access is disabled
Null session access is disabled
NTLMSSP is required for all NTLMv2 authentication attempts
In addition, the following security settings are configured for each available share. Also ensure that you create a non-root user to access the shares with, and that all accounts use strong passwords (ideally 12+ complex characters).
Finally, a couple of things to note:
If you read the release notes for Unraid 6.9.2, you'll see that Unraid uses Samba version 4.12.14. This is extremely important. If you, like me, google SMB configuration settings, you'll eventually come across the documentation for the current version of Samba. But Unraid is not running the latest version. The correct documentation to follow is for the 4.12 branch of Samba, and the configuration options are significantly different, enough that a valid config for 4.15 will not work for 4.12.
With "null passwords = No" you must enable Secure or Private security modes on each exported Unraid share; guest access won't work.
There is currently no way to add per-share custom smb.conf settings, so either the server gets hardened or it does not. Do not apply a [share_name] tag, as it will not work.
It is not possible to specify `client smb3 encryption algorithms` in version 4.12.x of Samba.
Kerberos authentication and domain authentication may be preferable in other circumstances; in those cases, additional hardening options may be considered.
If you, like me, use VLC media player on mobile devices, you may find that SMBv3 with encryption makes the host inaccessible on iOS devices. The VLC team is aware of this and there is a fix available in the bleeding edge/development version of the app, but not in the current store version (last I checked, the fix hadn't been released). It should work fine with Android/Windows VLC.
If you have any suggestions for other options that I have not included here, or that you think are a mistake, please let me know and I'd be most happy to look into them and adjust.
Some other quick hardening suggestions for Unraid in general: disable whatever services you don't need. In my case, that means I:
Disable NFS
Disable FTP
Disable 'Start APC UPS daemon'
If you enable Syslog, also enable NTP and configure it.
Disable Docker. A quick note on Docker: having the service enabled allows 'ip forwarding', which could, in theory, be used to route traffic via the host to bypass firewall rules (depending on your network topology, obviously). Hope that helps someone else out there. Cheers!
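For copy-paste convenience, the extras block from the post can be written out as one unit, exactly as it should appear in the SMB extras field. A temp file stands in below so the sketch runs anywhere; on Unraid the GUI stores this for you.

```shell
# The hardened SMB extras block from the post, one setting per line
# (temp file used here; the Unraid GUI manages the real location)
extras=$(mktemp)
cat > "$extras" <<'EOF'
server min protocol = SMB3_11
client ipc min protocol = SMB3_11
client signing = mandatory
server signing = mandatory
client ipc signing = mandatory
client NTLMv2 auth = yes
smb encrypt = required
restrict anonymous = 2
null passwords = No
raw NTLMv2 auth = no
EOF
wc -l < "$extras"   # 10 settings
```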
    1 point
  9. Let us know if you self-host a Project Zomboid server here!
    1 point
  10. NEVERMIND. I will be judged. Restarting the hub and router fixed the issue. I shall go hide in a corner now...
    1 point
  11. I've been using My Servers since it released, and this entire time I thought this thread was the only section for posting about it. I feel so dumb right now.
    1 point
  12. I would say to use FileIO when using a disk that contains data and you don't have any other disk you can spare for iSCSI. I would go for Block when you are able to dedicate a complete drive to this function. My case: I have both, one 4TB HDD completely available to Windows, and a 300GB image on my SSD for game storage (for the VM and my gaming rig).
    1 point
  13. For the disk issue see here; the NVMe issue is unrelated, it dropped offline:
Jan 31 17:46:20 Unraid kernel: nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10
Jan 31 17:46:20 Unraid kernel: nvme 0000:06:00.0: enabling device (0000 -> 0002)
Jan 31 17:46:20 Unraid kernel: nvme nvme0: Removing after probe failure status: -19
Look for a BIOS update. This can also help sometimes: some NVMe devices have issues with power states on Linux. To try it, on the main GUI page click on the flash device, scroll down to "Syslinux Configuration", make sure it's set to "menu view" (on the top right), and add this to your default boot option after "append initrd=/bzroot":
nvme_core.default_ps_max_latency_us=0
e.g.: append initrd=/bzroot nvme_core.default_ps_max_latency_us=0
Reboot and see if it makes a difference.
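For reference, the full default boot stanza in /boot/syslinux/syslinux.cfg would then look roughly like this. The label and kernel lines shown are the stock Unraid defaults and are an assumption; your stanza may differ.

```
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot nvme_core.default_ps_max_latency_us=0
```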
    1 point
  14. From the terminal, unraid-api stop should fix you up
    1 point
  15. Ah, I was wondering where those logs were going, thanks! I have never been able to get snapshots to work correctly. When I first set this up, I had trouble with it, so you suggested that I disable that feature and just let it run full backups, which has been working fine up until recently. Looking at the logs, I think I see the issue. It appears that if a machine is suspended, rather than running or shut down, it gets confused and doesn't know what to do, so it errors out. This isn't a problem, since I just have to not suspend my VMs, no biggie. My next backup is scheduled to run at 2am tomorrow morning, so we'll see how it goes with all machines running. I expect it'll work fine, since I ran a manual backup the other day and it didn't have any problem shutting down and restarting the VMs as needed. Great work on this plugin, BTW. It's giving me a little peace of mind that I won't mess around and break something too badly. lol Now, if we just had in-GUI snapshot management like in VMware, THAT would be killer....
    1 point
  16. Can be several issues; most likely too many entries in the database from too many tries. Some approaches here.
    1 point
  17. ...the folder "/UNRAID" shouldn't exist either. What does a "mount" on the command line say?
    1 point
  18. I understand, no problem. 😁 Thanks for the help/hints before; they gave me ideas and a simple solution anyway.
    1 point
  19. Bingo bango, found it... for those playing at home, you can edit the .php file: 'maximal_upload_size' =>
    1 point
  20. Place the file under /boot/extra. It will then be installed at startup. While the system is running: installpkg /path/.... Sent from my SM-G981B using Tapatalk
    1 point
  21. I would contact the manufacturer: https://unraid.net/contact Please let them know the e-mail address your Unraid licence runs under. As long as such problems with the MyServers plugin exist, I consider the whole thing a beta:
    1 point
  22. Ah ok, learned something new again. Thanks for the clarification. I am using it for Matrix, with a secret and the default key paths of the container. Or should I delete the default paths prefilled in the template to have certs created? Edit: checked appdata and there are certs/keys, so it looks good I guess..
    1 point
  23. I think there is some misunderstanding about how this works. STUN/TURN uses its own protocols, and if you proxy it through http/https you may introduce other issues, because STUN/TURN is a separate protocol, different from http/https, and uses its own encryption; that's why the container also creates certificates that are then used to encrypt the traffic. It uses its own encryption and you don't have to proxy it. I think you are using your STUN/TURN server for Nextcloud Talk, or am I wrong? If you are, you have to enter the Shared Secret, and that is what is used to encrypt the traffic. Hope that makes sense to you.
    1 point
  24. When I did that once, everything was mercilessly deleted. I have to admit that I never understood the overview pages of the NERD and DEV plugins. They simply creep me out. That's why I now delete plugins I no longer need by hand from the flash drive and wait for the next necessary restart of the server until they are gone. Did you really fully understand that page and the effects of the various actions? Respect.
    1 point
  25. Also tick the uninstall checkbox at the top so it gets uninstalled right away.
    1 point
  26. Looked weird, but Squid had an explanation in the above answer🙂
    1 point
  27. Thanks for the link, I had missed that information. Then I can safely install the docker container.
    1 point
  28. Unraid shows a container as stopped if it shuts down before the timeout. Normally a docker container will shut down gracefully, but if for some reason it doesn't, the timeout gives you a way to force a shutdown without needing to manually intervene.
    1 point
  29. 1 point
  30. The issue in the original post is solved. The issue you're all having is unrelated, it's related directly to the cloud issues we're currently having.
    1 point
  31. Even though it is empty, just the presence of the domains folder could cause that warning, so delete the empty folder.
    1 point
  32. Those 2 "references" are actually identical, and are because of a subtle change in how things are organized at lsio: https://www.linuxserver.io/blog/wrap-up-warm-for-the-winter
    1 point
  33. This message appears sporadically for me at VM start. I then always have to delete the VM configuration and set it up again exactly as before. Of course, I don't delete the image file in the process. Works 100% for me.
    1 point
  34. Jan 29 04:47:51 Infinity kernel: mce: [Hardware Error]: Machine check events logged
Jan 29 04:47:51 Infinity kernel: [Hardware Error]: Corrected error, no action required.
Jan 29 04:47:51 Infinity kernel: [Hardware Error]: CPU:1 (19:21:2) MC11_STATUS[Over|CE|-|AddrV|PCC|-|CECC|-|Poison|-]: 0xd7894800017d60c0
Jan 29 04:47:51 Infinity kernel: [Hardware Error]: Error Addr: 0x0000000000000000
Jan 29 04:47:51 Infinity kernel: [Hardware Error]: IPID: 0x0000000000000000
Jan 29 04:47:51 Infinity kernel: [Hardware Error]: L3 Cache Ext. Error Code: 61
Jan 29 04:47:51 Infinity kernel: [Hardware Error]: cache level: RESV, tx: INSN
Looks like it's a typical Ryzen MCE and nothing to worry about.
    1 point
  35. Please use the dedicated support section of the forum: https://forums.unraid.net/forum/94-my-servers-plugin-support/ Support in announcement threads is messy and confusing for everyone involved.
    1 point
  36. I heard a rumor that it will be out as soon as Ed finishes working on it, absolutely free of charge, voluntarily, in his free time.
    1 point
  37. I have been trying this since yesterday as well. The Tdarr server logs will tell you that it expected a different Node ID (the name you set). I think this is because even if you change the node port in Unraid's docker template, the config file in /app/configs still sets the node port to 8267. The problem is, if I change that port to the same one I set in the template, the server can't contact the node anymore. Now I finally got it working by using the "internal docker IPs". Probably not the cleanest way, but it works for me for now. This is how my templates look: https://imgur.com/a/nOs1V3P
    1 point
  38. What I did to solve the problem: in a terminal, cd to the config folder, then nano motioneye.conf. Input random text and save, to make nano create the file. Remove the random text and save, to leave an empty file. Start the docker. Below is my log output for the docker, first before I did the above, then after:
---Before---
CRITICAL:root:failed to read settings from "/etc/motioneye/motioneye.conf": [Errno 2] No such file or directory: '/etc/motioneye/motioneye.conf'
-----After--------
INFO: hello! this is motionEye server 0.42
INFO: hello! this is motionEye server 0.42
INFO: main config file /etc/motioneye/motion.conf does not exist, using default values
INFO: cleanup started
INFO: wsswitch started
INFO: tasks started
INFO: mjpg client garbage collector started
INFO: server started
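The nano dance above just produces an empty file; assuming an empty config really is all motionEye needs here (which is what the post reports), a plain touch does the same thing. A temp directory stands in below so the sketch runs anywhere; inside the container the real path is /etc/motioneye/motioneye.conf.

```shell
# Create the empty motioneye.conf the container complains about
# (temp dir stands in for the mapped config folder)
confdir=$(mktemp -d)
touch "$confdir/motioneye.conf"
ls "$confdir"   # motioneye.conf
```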
    1 point
  39. I am using UPnP on my Fritzbox, but I am stuck there too.
    1 point
  40. On retroactively editing tags for documents already in the system: of course you can continue to classify documents MANUALLY after the fact, define new tags, and so on (see above). The question, though, is whether a reclassification could also be generated automatically for old documents, with the help of the neural network behind the 'Auto' algorithm. I could imagine that this is the direction Smolo's question is going?! One would have to run a test, with at least 500 documents in the system. Example: the tag 'Zahnarzt' (dentist) is already assigned in the system, as is the tag 'Rechnung' (invoice). Now go into the configuration under Tags and define a new tag 'Zahnarztrechnung' (dentist invoice) with the algorithm 'Auto', without manually applying the tag to any documents. The system will probably not change anything, not even after the docker has been running for a few hours, but that would have to be tried out... I read somewhere that the algorithm is constantly active and regularly refines something?! But I no longer remember where I read that... It gets interesting if you feed an already processed document through the 'consume' folder into the system again (a duplicate): it could be that the duplicate/new entry, unlike in the 'first pass', now additionally gets the tag Zahnarztrechnung... via the Auto matching. If the old document does not get the new tag, we know that the algorithm really ONLY runs for new entries. That is not entirely crystal clear to me from the documentation... Apparently this is a case of trial over study: reading the docs alone doesn't do it (at least for me), but I also still have quite a bit of testing to do before I can say more... feeding in the archive simply still makes too much work (old handwritten documents first need a transcript for OCR recognition, and more...). So perhaps please try it yourself.
    1 point
  41. Today, in collaboration with @steini84, I released an update of the ZFS plugin (v2.0.0) to modernize it and switch from unRAID version detection to kernel version detection, along with a general overhaul of the plugin. When you update the plugin from v1.2.2 to v2.0.0, the plugin will delete the "old" ZFS package and pull down the new one (about 45MB). Please wait until the download is finished and the "DONE" button is displayed; please don't click the red "X" button! After it finishes you can use your server and ZFS as usual, and you don't need to take any further steps like rebooting. The new version of the plugin also includes the Plugin Update Helper, which will download packages for plugins before you reboot when you are upgrading your unRAID version, and will notify you when it's safe to reboot. The new version of the plugin will also check on each reboot whether a newer version of ZFS is available, download it and install it (the update check is activated by default). If you want to disable this feature, simply run this command from an unRAID terminal:
sed -i '/check_for_updates=/c\check_for_updates=false' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"
If you have disabled this feature already and want to enable it again, run this command from an unRAID terminal:
sed -i '/check_for_updates=/c\check_for_updates=true' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"
Please note that this feature needs an active internet connection on boot. If you run, for example, AdGuard/PiHole/pfSense/... on unRAID, it is very likely that you have no active internet connection on boot, so the update check will fail and the plugin will fall back to installing the currently available local ZFS package. It is now also possible to install unstable ZFS packages, if unstable packages are available (this is turned off by default).
If you want to enable this feature, simply run this command from an unRAID terminal:
sed -i '/unstable_packages=/c\unstable_packages=true' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"
If you have enabled this feature already and want to disable it, run this command from an unRAID terminal:
sed -i '/unstable_packages=/c\unstable_packages=false' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"
Please note that this feature, like the update check, also needs an active internet connection on boot (if no unstable package is found, the plugin automatically returns this setting to false so that it does not pull unstable packages; unstable packages are generally not recommended). Please also keep in mind that ZFS has to be compiled for every new unRAID version. I would recommend waiting at least two hours after a new unRAID version is released before upgrading (Tools -> Update OS -> Update) because of the compile/upload process involved. Currently the process is fully automated for all plugins that need packages for each individual kernel version. The Plugin Update Helper will also inform you if a download failed when you upgrade to a newer unRAID version; this is most likely to happen when the compilation isn't finished yet, or some error occurred during compilation. If you get an error from the Plugin Update Helper, I would recommend creating a post here and not rebooting yet.
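The sed commands above all follow the same pattern: replace the whole line containing the setting with the desired value. A small runnable demo of that pattern against a throwaway copy of the settings file (the real path on the server is /boot/config/plugins/unRAID6-ZFS/settings.cfg):

```shell
# Demo of the toggle pattern used by the commands above,
# run against a throwaway file instead of the real settings.cfg
cfg=$(mktemp)
echo 'check_for_updates=true' > "$cfg"

# disable the boot-time update check
sed -i '/check_for_updates=/c\check_for_updates=false' "$cfg"

cat "$cfg"   # -> check_for_updates=false
```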
    1 point
  42. HOW TO RESTORE!!!!
Option 1 - Script: Use the great script from @petchav, many thanks! See the video below for a guide on how to use it.
Option 2 - Manual restore. You will need:
1. Your backup .img file (after extraction)
2. Your backed up XML file
3. Your backed up .fd file
Step 1 - In a terminal, extract the img file from the .zst backup. Example below; replace with your .zst file name.
zstd -d -C --no-check 20211114_0300_vdisk1.img.zst
You will likely get the error below IF you run the command above without --no-check:
Decoding error (36) : Restored data doesn't match checksum
*Note, the --no-check option MAY NOT be supported by Unraid. If this doesn't work in the terminal, try Cygwin (Windows) and run it there: https://www.cygwin.com. Place your .zst file in the c:\Cygwin folder. With Windows you can now also install a Linux environment as well (beyond the scope of this guide!). Copy the backup to the local machine or it will take forever! ALSO - you can back up WITHOUT compression and save yourself some grief!!! (See options)
Step 2 - Place the .img file back into the directory it was backed up from. Look in the backed up XML file for the following line and ensure that YOUR path AND files exist:
Step 3 - Place the backed up .fd file at: (file name may vary)
/etc/libvirt/qemu/nvram/4a2b120f-0ea9-846a-6e11-f097002e442d_VARS-pure-efi.fd
You may need to remove the 14 character timestamp at the start of the filename!! For example, remove 20211205_0300_ from 20211205_0300_4a2b120f-0ea9-846a-6e11-f097002e442d_VARS-pure-efi.fd
Step 5 - Verify ALL files are where they were BEFORE the backup. Look at the XML file for file locations.
Step 6 - Open your .xml file and copy the contents. Don't mess with it!
Step 7 - Create a new VM in Unraid. When asked, it does NOT matter what type (Win11, 10, etc.), just pick one. Once you do that, and BEFORE hitting "Create", you will have the VM options page. Select the "Form View" button at the top right of the screen and change to XML view. Select all contents and delete. Paste the contents of the backed up XML in, select Create, and away you go!
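The 14-character prefix mentioned in Step 3 is the backup timestamp (YYYYMMDD_HHMM_). A small sketch of stripping it with a plain cut, using the example filename from the post:

```shell
# Strip the 14-character "YYYYMMDD_HHMM_" backup prefix from the nvram file name
f=20211205_0300_4a2b120f-0ea9-846a-6e11-f097002e442d_VARS-pure-efi.fd
stripped=$(echo "$f" | cut -c15-)
echo "$stripped"   # -> 4a2b120f-0ea9-846a-6e11-f097002e442d_VARS-pure-efi.fd
```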
    1 point
  43. Another update. Doing some more digging, specifically on NetApp HGST drives, I found that on the DS4486 you have to pass a specific parameter to get the SMART test info:
smartctl -a -d sat /dev/your_disk
This worked when I ran the test on a disk from the console. Great information. I see there is an option for the individual disks in Unassigned Devices when you click on a disk: Smart Controller Type -> SAT. When you choose SAT it provides two additional fields, which I am unsure what to put in, as this is, again, new to me. If I leave auto blank and choose 12, it seems to show the SMART data. What do the 12 and 16 numbers represent? Any help on this one? Edit: the temps are reported in the SMART test, but not on the Unraid dash/GUI. I believe this is likely the smartmontools issue from the link I provided earlier. I think the next release will fix this, as it seems they have smartmontools 7.2 lined up for it.
    1 point
  44. @Squid Hello, you should definitely recheck this app. As you can see, it does not work and the dev doesn't even bother to answer. In fact, I even wonder how it was accepted into the unRAID "Appstore" in the first place.
    1 point
  45. The shutdown option is now available in the version of the Parity Check Tuning plugin I released today. I would be interested in any feedback on how I have implemented it, or any issues found while trying to use the shutdown feature.
    1 point
  46. Setting MakeMKV to run as root user fixed the drive not being detected for me. Use PGID and PUID as 0.
    1 point
  47. I realize this is a serious necropost, but I wanted to give @maciekish a huge thank you for sticking with this after the responses he got. I'm using a reverse proxy with Nginx, had the same problem, and this led me to the same solution. So for anybody getting here from Google, you'll want to add the following to your Nginx config to get things working again. You could be clever and only apply it to the locations that are broken, but since my reverse proxy is not even exposed outside my network I just disabled gzip for the whole server definition: server { gzip off; ... } As an FYI to all the doubters above, if they still hold their positions: a reverse proxy is a very handy way to avoid having to remember unique port numbers for all your internal services. http://unraid.home.local or http://sonarr.home.local are totally valid internal domains if your DNS is set up right, and they give your password database tool of choice a hostname to match on, so it isn't offering up 30 passwords because everything is on the same server IP address.
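A minimal sketch of what that could look like for one proxied service. The server_name, backend IP and port here are made-up placeholders for a typical setup, not the poster's actual config; only the `gzip off;` line is the workaround from the post.

```nginx
server {
    listen 80;
    server_name sonarr.home.local;   # hypothetical internal name

    # Workaround from the post: disable gzip for the whole server block
    gzip off;

    location / {
        proxy_pass http://192.168.1.10:8989;   # placeholder backend IP:port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```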
    1 point
  48. For future reference, the issue is due to "buffering" in gzip in Caddy. Workaround: gzip { not /plugins }
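In context, that directive sits inside a site block. A hedged Caddyfile sketch using the Caddy v1 syntax the workaround implies (hostname and upstream are placeholders):

```caddyfile
unraid.home.local {
    proxy / 192.168.1.10 {
        transparent
        websocket
    }
    # Workaround from the post: compress everything except the plugin
    # endpoints that break when gzipped.
    gzip {
        not /plugins
    }
}
```

Caddy v2 restructured both the proxy and compression directives, so this fragment only applies to v1 Caddyfiles.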
    1 point
  49. Yes, you can do a New Config with "retain all". Then select "Parity is already valid" when starting the array. No ill effects.
    1 point
  50. unRAID's parity drive provides a lot of protection. But some failure scenarios actually corrupt parity - like a dying drive spewing junk. It doesn't matter if you have one parity or a thousand; that behavior corrupts all parities and makes parity recovery impossible. And when you are talking about low probabilities like dual drive failure, things like flood, hardware failure (e.g. a PSU failure causing a spike), fire, theft, vandalism, and accident need to be considered. These can easily knock out all or a good percentage of the drives. Tri-parity+ does not help with those - maybe not as much as pictures of your hardware so you could file an insurance claim!

Now I get it, a lot of people have media on their servers. And although any one (or even 100) media files might be recreatable, thousands are not. And since it is not economic for some of us to keep a large backup set, anything we can do to raise the probability of not losing our data is worthwhile. In that spirit, with a large array, an extra parity can make sense. But I did a back-of-the-napkin analysis and figured single parity would protect you some 95% of the time, and that dual parity would add maybe 0.5% on top of that (and that was based on a 100% chance of a disk dying every year). With a third parity you are into hundredths or thousandths of a percent of advantage. Is that worth the price of a drive as large as or larger than your largest drive?

Two parities can really help in one case - with users who make mistakes. A shot in the foot or a poorly connected cable while trying to recover from one failure is hugely more likely than a second physical failure. That's why I recommend dual parity for new users, at least till they learn the ropes. I also recommend hot-swap cages to all but eliminate cable issues (the highest risk for data loss!).

Another important thing to remember is that unlike RAID, losing 2 drives with 1 parity costs you only 1 or 2 disks' worth of data. So even in a large array, the scope of your data loss is limited. In an equivalent RAID5 array, you'd lose the entire array. So while data loss in RAID5 and data loss with single parity are equally likely, the impact is hugely greater for RAID5. And with the tools that unRAID gives you to put the array back together and say abracadabra, parity is valid again, many situations become recoverable even if not 100%.

When RFS was popular, we found it very good at recovering from data corruption after a partial recovery, salvaging the lion's share of a disk even in less than optimal conditions. Unfortunately RFS is now a very poor choice, and XFS is the most popular alternative. But XFS does not provide good recovery from corruption, so that hurts our ability to recover. (@c3, maybe you can fix this in your spare time!) I don't know about BTRFS - if it is good at recovering from corruption, it might be a better long-term choice.

Would 3rd, 4th, and nth parity be useful to some? Probably not, although a lot of users would think so and put those protections in place. But if, instead of loading up on parities, people used the extra disks for a backup and slipped it into a safety deposit box, they'd have protected themselves 1000x better than the extra parity would. And everyone should be using hot-swap bays to avoid cabling problems. If we focused on the real risks, and dropped the panacea that the real problem is drive failures, we'd all be more protected!
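The back-of-the-napkin analysis above can be sketched with a simple binomial model. Everything here is an illustrative assumption (drive count, per-drive failure probability during a recovery window, independent failures), not the poster's actual arithmetic:

```python
from math import comb

def prob_more_than(parities: int, drives: int, p_fail: float) -> float:
    """P(more than `parities` drives fail together), assuming each of
    `drives` disks fails independently with probability `p_fail` during
    the window where concurrent failures mean data loss."""
    return sum(
        comb(drives, k) * p_fail**k * (1 - p_fail) ** (drives - k)
        for k in range(parities + 1, drives + 1)
    )

# Hypothetical example: 12-drive array, 2% chance each disk dies during
# a rebuild window. Compare loss probability for 1 vs 2 parity drives.
single = prob_more_than(1, 12, 0.02)
dual = prob_more_than(2, 12, 0.02)
print(f"loss with 1 parity: {single:.4%}, with 2 parities: {dual:.4%}")
```

The pattern matches the post's point: each extra parity shaves off a rapidly shrinking sliver of risk, while correlated events (fire, flood, PSU spike) sit entirely outside this model.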
    1 point