Leaderboard

Popular Content

Showing content with the highest reputation on 01/31/22 in all areas

  1. Finally the new Macinabox is ready. Sorry for the delay; work has been a F******G B*****D lately, taking all my time. It also has a new template, so please make sure your template is updated too (it will still work with the old template). A few new things have been added. It now supports Monterey, Big Sur, Catalina, Mojave and High Sierra, and you will see more options in the new template. As well as choosing the vdisk size for the install, you can now also choose whether the VM is created with a raw or qcow2 (my favourite!) vdisk. The latest version of OpenCore (0.7.7) is in this release. I will try to update the container regularly as new versions come out. However, you will notice a new option in the template where you can choose to install with the stock OpenCore (in the container) or a custom one. You can add a custom one in the custom_opencore folder in the Macinabox appdata folder. You can download versions to put there from https://github.com/thenickdude/KVM-Opencore/releases Choose the .gz version, place it in the above folder, set the template to custom, and it will use that (useful if I am slow in updating!! 🤣). Note: if it is set to custom but Macinabox can't find a custom OpenCore to unzip in that folder, it will use the stock one. There is also another option to delete and replace the existing OpenCore image that your VM is using. Set this to yes and run the container, and it will remove the OpenCore image from the macOS version selected in the template and replace it with a fresh one, stock or custom. By default the NICs for Monterey and Big Sur are virtio, and the vDisk bus is virtio for these too. High Sierra, Mojave and Catalina use a SATA vDisk bus and e1000-82545em for their NICs. The correct NIC type for the "flavour" of OS you choose will be added automatically.
However, if for any macOS version you want to override the NIC type, you can change the default NIC type in the template between virtio, virtio-net, e1000-82545em and vmxnet3. By default the NIC for all VMs is on <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> This makes the network adapter appear built in and should help with Apple services. Make sure to delete the Macinabox helper script before running the new Macinabox so the new script is put in place, as there are some changes in that script too. I should be making some other changes in the next few weeks, but that's all for now.
    6 points
  2. Yes werbewunder, it is now well known throughout the German Unraid forum that you are having problems. One thread is enough. Thank you.
    3 points
  3. Let's revisit once we are past the current issue. At the moment you want the unraid-api stopped anyway.
    2 points
  4. I installed a "virtual monitor" and I was able to change the resolution. I am going to try another dummy plug
    2 points
  5. Um, yeah, thanks for pointing that out. The problem is that the Unraid VM manager, on an update made from the GUI (not an XML edit), will automatically move the NICs to bus 0x01, hence the address <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> I will take a look at this and perhaps add a fix to the Macinabox helper script.
    2 points
  6. Yup. My bad. I had to edit the URL slightly (add the -) after posting it. Let the spanking begin.
    2 points
  7. The power consumption of the AEG Protect NAS UPS bugged me so much that I went and ordered the small CyberPower C550EPFCLCD as well. I can confirm it really does have a very low power draw of only 4 to 5 watts. I can't say whether my AEG UPS is a lemon, but I can recommend the CyberPower UPS without reservation.
    2 points
  8. Hi folks, after spending a fair bit of time hardening my SMB configuration, I figured I'd write a quick guide on what I consider the best settings for the security of an SMB server running on Unraid 6.9.2. First, before we get into SMB settings, you may also want to consider hardening the data while it is at rest by specifying an encrypted file-system type for your array (although this isn't a share-specific option). For SMB, the following block is what I consider a hardened configuration for a standalone server that is not domain joined and not using Kerberos authentication:

server min protocol = SMB3_11
client ipc min protocol = SMB3_11
client signing = mandatory
server signing = mandatory
client ipc signing = mandatory
client NTLMv2 auth = yes
smb encrypt = required
restrict anonymous = 2
null passwords = No
raw NTLMv2 auth = no

This configuration block is to be entered into the SMB Extras section of the SMB settings page. These settings will break compatibility with legacy clients, but when I say legacy I'm talking Windows Server 2003/XP; Windows 10+ clients should work without issue, as they all support (but are not necessarily configured to REQUIRE) these security features. These settings force the following:
- All communications must occur via SMB 3.1.1
- All communications must use signing
- NTLMv2 authentication is required; LanMan authentication is implicitly disabled
- All communications must be encrypted
- Anonymous access is disabled
- Null session access is disabled
- NTLMSSP is required for all NTLMv2 authentication attempts
In addition, security settings should be configured for each available share. Also ensure that you create a non-root user to access the shares and that all accounts use strong passwords (ideally 12+ complex characters).
Finally, a couple of things to note:
- If you read the release notes for Unraid 6.9.2, you'll see that Unraid uses Samba version 4.12.14. This is extremely important. If you, like me, google SMB configuration settings, you'll eventually come across the documentation for the current version of Samba. But Unraid is not running the latest version. The correct documentation to follow is for the 4.12 branch of Samba, and the configuration options are significantly different, enough that a valid config for 4.15 will not work on 4.12.
- With "null passwords = No" you must enable Secure or Private security modes on each exported Unraid share; guest access won't work.
- There is currently no way to add per-share custom smb.conf settings, so either the whole server gets hardened or it does not. Do not apply a [share_name] tag, as it will not work.
- It is not possible to specify `client smb3 encryption algorithms` in version 4.12.x of Samba.
- Kerberos authentication and domain authentication may be preferable in other circumstances; in those cases, additional hardening options may be considered.
- If you, like me, use VLC media player on mobile devices, you may find that SMBv3 with encryption makes the host inaccessible on iOS devices. The VLC team is aware of this, and a fix exists in the bleeding-edge/development version of the app, but not in the current store version (last I checked, the fix hadn't been released). It should work fine with VLC on Android/Windows.
If you have any suggestions for other options that I have not included here, or that you think are a mistake, please let me know and I'd be happy to look into them and adjust. Some other quick suggestions for Unraid hardening in general: disable whatever services you don't need. In my case, that means I:
- Disable NFS
- Disable FTP
- Disable 'Start APC UPS daemon'
- Disable Docker
If you enable Syslog, also enable NTP and configure it. A quick note on Docker: having the service enabled allows 'ip forwarding', which could in theory be used to route traffic via the host to bypass firewall rules (depending on your network topology, obviously). Hope that helps someone else out there. Cheers!
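A small shell sketch of how you might stage the hardened settings above before pasting them into SMB Extras: the scratch paths are hypothetical, and the optional testparm check only runs where the Samba tools are installed.

```shell
# Stage the hardened settings in a scratch file so they can be reviewed
# before pasting into Settings -> SMB -> SMB Extras.
cat > /tmp/smb-extra.conf <<'EOF'
server min protocol = SMB3_11
client ipc min protocol = SMB3_11
client signing = mandatory
server signing = mandatory
client ipc signing = mandatory
client NTLMv2 auth = yes
smb encrypt = required
restrict anonymous = 2
null passwords = No
raw NTLMv2 auth = no
EOF

# Quick sanity check that the key lines made it in unmangled.
grep -c 'protocol = SMB3_11' /tmp/smb-extra.conf

# If the Samba tools are available, let testparm validate the syntax
# (it expects a [global] section, so wrap the fragment first).
if command -v testparm >/dev/null; then
  { echo '[global]'; cat /tmp/smb-extra.conf; } > /tmp/smb-test.conf
  testparm -s /tmp/smb-test.conf
fi
```

Remember to check the resulting settings against the Samba 4.12 documentation, not the latest branch.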
    1 point
  9. Let us know if you self-host a Project Zomboid server here!
    1 point
  10. NEVERMIND. I will be judged. Restarting the hub and router fixed the issue. I shall go hide in a corner now...
    1 point
  11. I've been using My Servers since it released, and this entire time I thought this thread was the only section for posting about it. I feel so dumb right now.
    1 point
  12. I would say to use FileIO when using a disk that contains data, and you don't have any other disk that you can spare for the function of ISCSI. I would go for block when you are able to have a complete drive available for this function. My case: I have both, 1 4TB HDD completely available to Windows, and a 300GB image on my SSD for game storage (for VM and my gaming rig).
    1 point
  13. For the disk issue see here; the NVMe issue is unrelated, it dropped offline:

Jan 31 17:46:20 Unraid kernel: nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10
Jan 31 17:46:20 Unraid kernel: nvme 0000:06:00.0: enabling device (0000 -> 0002)
Jan 31 17:46:20 Unraid kernel: nvme nvme0: Removing after probe failure status: -19

Look for a BIOS update. This can also help sometimes: some NVMe devices have issues with power states on Linux. On the main GUI page, click on the flash drive, scroll down to "Syslinux Configuration", make sure it's set to "menu view" (on the top right) and add this to your default boot option, after "append initrd=/bzroot": nvme_core.default_ps_max_latency_us=0 e.g.: append initrd=/bzroot nvme_core.default_ps_max_latency_us=0 Reboot and see if it makes a difference.
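The GUI edit above amounts to appending one kernel parameter to the `append initrd=/bzroot` line. A sketch of the same change as a sed one-liner, simulated here on a scratch copy rather than the real /boot/syslinux/syslinux.cfg:

```shell
# Scratch file standing in for /boot/syslinux/syslinux.cfg
echo 'append initrd=/bzroot' > /tmp/syslinux.cfg

# Append the NVMe power-state workaround to the boot line
sed -i 's|append initrd=/bzroot|append initrd=/bzroot nvme_core.default_ps_max_latency_us=0|' /tmp/syslinux.cfg

cat /tmp/syslinux.cfg
# -> append initrd=/bzroot nvme_core.default_ps_max_latency_us=0
```

A reboot is still required for the parameter to take effect.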
    1 point
  14. From the terminal, unraid-api stop should fix you up
    1 point
  15. Ah, I was wondering where those logs were going, thanks! I have never been able to get snapshots to work correctly. When I first set this up, I had trouble with it, so you suggested that I disable that feature and just let it run full backups, which has been working fine up until recently. Looking at the logs, I think I see the issue. It appears that if a machine is suspended, rather than running or shut down, the plugin gets confused and doesn't know what to do, so it errors out. This isn't a problem, since I just have to not suspend my VMs, no biggie. My next backup is scheduled to run at 2am tomorrow morning, so we'll see how it goes with all machines running. I expect it'll work fine, since I ran a manual backup the other day and it didn't have any problem shutting down and restarting the VMs as needed. Great work on this plugin, BTW. It's giving me a little peace of mind that I won't mess around and break something too badly. lol Now, if we just had in-GUI snapshot management, like in VMware, THAT would be killer....
    1 point
  16. It can be several issues, most likely even too many entries in the database from too many tries. Some approaches here
    1 point
  17. ...the folder "/UNRAID" shouldn't exist either. What does a "mount" on the command line say?
    1 point
  18. I understand, no problem. 😁 Thanks for the help/hints before; they give me some ideas, and a simple solution anyway.
    1 point
  19. code-server could help you there. I recently saw a tutorial on setting it up on Unraid in which the content creator used code-server to edit his docker configs. Then the permissions shouldn't really be a problem, since you're accessing the files locally on the server?
    1 point
  20. Bingo bango, found it... for those playing at home, you can edit the .php file: 'maximal_upload_size' =>
    1 point
  21. Place the file under /boot/extra and it will be installed at startup. While the system is running: installpkg /path/....
    1 point
  22. I would contact the manufacturer: https://unraid.net/contact Please tell them the e-mail address your Unraid licence runs under. As long as such problems with the My Servers plugin exist, I consider the whole thing a beta:
    1 point
  23. Ah ok, learned something new again. Thanks for the clarification. I am using it for Matrix, with a secret and the default key paths of the container. Or should I delete the default paths prefilled in the template to have certs created? Edit: checked appdata and there are certs/keys, so it looks good, I guess.
    1 point
  24. I think there is some misunderstanding about how this works. STUN/TURN uses its own protocols, and if you proxy it through http/https you may introduce other issues, because STUN/TURN is a separate protocol, different from http/https, with its own encryption; that's why the container also creates certificates, which are then used to encrypt the traffic. Since it uses its own encryption, you don't have to proxy it. I think you are using your STUN/TURN server for Nextcloud Talk, or am I wrong? If so, you have to enter the Shared Secret, and that is what is used to secure the traffic. Hope that makes sense to you.
    1 point
  25. Also tick the "uninstall" checkbox at the top so it gets uninstalled straight away.
    1 point
  26. An important clarification! Bus 0x01 is not "built-in"; bus 0x00 is. However, in the code on GitHub it is (in)correctly set to bus 0x00. "Incorrectly" because the address of the NIC is currently set to bus 0x00, slot 0x00 and function 0x00, which should be a reserved address for the host bridge! This should be changed to bus 0x00, slot 0x02 and function 0x00.
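In libvirt XML terms, the suggested fix would look something like the following sketch of a NIC definition. Only the address element reflects the correction above; the MAC and bridge values are placeholders.

```xml
<interface type='bridge'>
  <mac address='52:54:00:xx:xx:xx'/>
  <source bridge='br0'/>
  <model type='virtio'/>
  <!-- bus 0x00 makes the NIC appear built in; slot 0x02 avoids the
       reserved host-bridge address at bus 0x00 / slot 0x00 -->
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
```

Note that, as mentioned in item 5 above, GUI edits in the Unraid VM manager may rewrite this address back to bus 0x01.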
    1 point
  27. https://unraid-dl.sfo2.cdn.digitaloceanspaces.com/next/unRAIDServer-6.10.0-rc2-x86_64.zip Super weird, I confirm that I have the same issue with Squid's link, the one I copied from the original post seems to work ... and I don't see a difference. @randomninjaatk, does it work for you ?
    1 point
  28. Looked weird, but Squid had an explanation in the above answer🙂
    1 point
  29. Thanks for the link, I had missed that information. Then I can safely install the docker container.
    1 point
  30. Unraid shows a container as stopped if it shuts down before the timeout. Normally a docker container will shut down gracefully, but if for some reason it doesn't, the timeout gives you a way to force the shutdown without needing to intervene manually.
    1 point
  31. The issue in the original post is solved. The issue you're all having is unrelated, it's related directly to the cloud issues we're currently having.
    1 point
  32. https://unraid-dl.sfo2.cdn.digitaloceanspaces.com/next/unRAIDServer-6.10.0-rc2-x86_64.zip
    1 point
  33. Those 2 "references" are actually identical; they result from a subtle change in how things are organized at lsio: https://www.linuxserver.io/blog/wrap-up-warm-for-the-winter
    1 point
  34. This message appears sporadically for me on VM start. I then always have to delete the VM configuration and set it up again exactly as before. Naturally, I don't delete the image file in the process. Works for me 100% of the time.
    1 point
  35. You also need to reboot. There's a note when you uninstall advising this.
    1 point
  36. Jan 29 04:47:51 Infinity kernel: mce: [Hardware Error]: Machine check events logged
Jan 29 04:47:51 Infinity kernel: [Hardware Error]: Corrected error, no action required.
Jan 29 04:47:51 Infinity kernel: [Hardware Error]: CPU:1 (19:21:2) MC11_STATUS[Over|CE|-|AddrV|PCC|-|CECC|-|Poison|-]: 0xd7894800017d60c0
Jan 29 04:47:51 Infinity kernel: [Hardware Error]: Error Addr: 0x0000000000000000
Jan 29 04:47:51 Infinity kernel: [Hardware Error]: IPID: 0x0000000000000000
Jan 29 04:47:51 Infinity kernel: [Hardware Error]: L3 Cache Ext. Error Code: 61
Jan 29 04:47:51 Infinity kernel: [Hardware Error]: cache level: RESV, tx: INSN

Looks like it's a typical Ryzen MCE and nothing to worry about.
    1 point
  37. I heard a rumor that it will be out as soon as Ed finishes working on it, absolutely free of charge, voluntarily, in his free time.
    1 point
  38. I have been trying this since yesterday as well. The Tdarr server logs will tell you that it expected a different node ID (the name you set). I think this is because even if you change the node port in Unraid's docker template, the config file in /app/configs still sets the node port to 8267. The problem is, if I change the port there to the same one I set in the template, the server can't contact the node anymore. I finally got it working by using the "internal docker IPs". Probably not the cleanest way, but it works for me for now. This is how my templates look: https://imgur.com/a/nOs1V3P
    1 point
  39. Thanks to this thread, I once again have Nextcloud running pretty much smoothly. Many thanks to all contributors. Only one error remains: since this Nextcloud container is not as old as the error, I assume it has to do with the database. What can I do about that? Thanks and regards, Martin
    1 point
  40. What I did to solve the problem: in a terminal, cd to the config folder and run nano motioneye.conf. Input random text and save, to make nano create the file; then remove the random text and save again, to leave an empty file. Start the docker. Below is my log output for the docker, first before I did the above, then after:

---Before---
CRITICAL:root:failed to read settings from "/etc/motioneye/motioneye.conf": [Errno 2] No such file or directory: '/etc/motioneye/motioneye.conf'

---After---
INFO: hello! this is motionEye server 0.42
INFO: hello! this is motionEye server 0.42
INFO: main config file /etc/motioneye/motion.conf does not exist, using default values
INFO: cleanup started
INFO: wsswitch started
INFO: tasks started
INFO: mjpg client garbage collector started
INFO: server started
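The nano dance above just produces an empty file; a shorter sketch of the same fix is a plain touch. The config directory depends on your appdata mapping, so it is simulated here under /tmp.

```shell
# Stand-in for the motionEye config folder mapped to /etc/motioneye
conf_dir=/tmp/motioneye-demo
mkdir -p "$conf_dir"

# An empty motioneye.conf is enough to stop the CRITICAL
# "failed to read settings" error; defaults are used instead.
touch "$conf_dir/motioneye.conf"

ls -l "$conf_dir"
```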
    1 point
  41. On retroactively editing tags for documents already in the system: of course you can continue to classify documents MANUALLY after the fact, define new tags, and so on (see above). The question, though, is whether a reclassification could also be generated automatically for old documents, with the help of the neural network behind the 'Auto' algorithm. I could imagine that this is the direction Smolo's question was going?! One would have to run a test with at least 500 documents in the system. Example: the tag 'Zahnarzt' (dentist) is already assigned in the system, as is the tag 'Rechnung' (invoice). Now you go into the configuration under Tags and define a new tag 'Zahnarztrechnung' (dentist invoice) with the 'Auto' algorithm, without manually applying the tag to any documents. The system will probably not change anything, even after the docker container has been running for a few hours, but you'd have to try it out... I read somewhere that the algorithm is constantly active and regularly refines something?! But I no longer remember where I read that... What's interesting: if you feed an already-processed document through the 'consume' folder into the system again (a duplicate), it could be that the duplicate/new entry, unlike in the first pass, now additionally gets the tag Zahnarztrechnung via the Auto matching. If the old document does not get the new tag, we know the algorithm really only runs on new entries. That is not entirely clear to me from the documentation... Apparently "trying beats studying" here; reading the docs alone doesn't settle it (at least for me). But I still have some testing to do myself before I can say more... entering the archive is simply still too much work (old handwritten documents first need a transcription for OCR recognition, and more besides...). So perhaps
try it out for yourself.
    1 point
  42. Today, in collaboration with @steini84, I released an update to the ZFS plugin (v2.0.0) to modernize the plugin, switch from unRAID version detection to kernel version detection, and generally overhaul the plugin. When you update from v1.2.2 to v2.0.0, the plugin will delete the "old" ZFS package and pull down the new one (about 45MB). Please wait until the download is finished and the "DONE" button is displayed; please don't click the red "X" button! After it finishes you can use your server and ZFS as usual, and you don't need to take any further steps like rebooting. The new version of the plugin also includes the Plugin Update Helper, which downloads packages for plugins before you reboot when you are upgrading your unRAID version and notifies you when it's safe to reboot. The new version will also check on each reboot whether a newer version of ZFS is available, then download and install it (the update check is activated by default). If you want to disable this feature, simply run this command from an unRAID terminal:

sed -i '/check_for_updates=/c\check_for_updates=false' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

If you have disabled this feature and want to enable it again, run:

sed -i '/check_for_updates=/c\check_for_updates=true' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

Please note that this feature needs an active internet connection on boot. If you run, for example, AdGuard/PiHole/pfSense/... on unRAID, it is very likely that you have no active internet connection at boot, so the update check will fail and the plugin will fall back to installing the currently available local ZFS package. It is now also possible to install unstable ZFS packages when they are available (this is turned off by default).
If you want to enable this feature, simply run this command from an unRAID terminal:

sed -i '/unstable_packages=/c\unstable_packages=true' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

If you have enabled it and want to disable it again, run:

sed -i '/unstable_packages=/c\unstable_packages=false' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

Please note that this feature, like the update check, also needs an active internet connection on boot (if no unstable package is found, the plugin automatically returns this setting to false so that it doesn't pull unstable packages; unstable packages are generally not recommended). Please also keep in mind that ZFS has to be compiled for every new unRAID version. I would recommend waiting at least two hours after a new unRAID version is released before upgrading (Tools -> Update OS -> Update) because of the compile/upload process involved. Currently the process is fully automated for all plugins that need packages for each individual kernel version. The Plugin Update Helper will also inform you if a download failed when you upgrade to a newer unRAID version; this is most likely to happen when the compilation isn't finished yet or some error occurred during compilation. If you get an error from the Plugin Update Helper, I would recommend creating a post here and not rebooting yet.
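To see what the sed one-liners above actually do, here is the same substitution run against a scratch copy of the settings file (the real file lives at /boot/config/plugins/unRAID6-ZFS/settings.cfg):

```shell
# Scratch copy of the plugin settings file
cfg=/tmp/zfs-settings.cfg
echo 'check_for_updates=true' > "$cfg"

# The sed /pattern/c\replacement form replaces the whole matching line,
# so it works regardless of the setting's current value
sed -i '/check_for_updates=/c\check_for_updates=false' "$cfg"

cat "$cfg"
# -> check_for_updates=false
```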
    1 point
  43. Please add password protection, or even better: integrate it into the usual Unraid GUI so that its credentials are required.
    1 point
  44. I just checked the global settings, and while you can set it to SAT, you cannot specify "12" in the global setting. My guess is that it wouldn't work properly. Luckily, this will get fixed in the next release.
    1 point
  45. @Squid Hello, you should definitely recheck this app. As you can see, it does not work and the dev doesn't even bother to answer. In fact, I wonder how it was accepted into the unRAID "Appstore" in the first place.
    1 point
  46. There are hooks in the borgmatic config file for "before_backup" and "after_backup" that could be used to invoke an SSH command telling the Unraid parent to mount/unmount volumes. For example (where Unraid's IP is 192.168.1.6):

before_backup: ssh 192.168.1.6 mount /dev/sdj1 /mnt/disks/borg_backup
after_backup: ssh 192.168.1.6 umount /mnt/disks/borg_backup
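In a borgmatic config.yaml this would look something like the sketch below. The device path, mount point, and IP are taken from the example above; the list-style hooks layout is an assumption about the borgmatic version in use, so check it against your config's existing structure.

```yaml
hooks:
    before_backup:
        # Ask the Unraid host to mount the backup disk before borg runs
        - ssh 192.168.1.6 mount /dev/sdj1 /mnt/disks/borg_backup
    after_backup:
        # Unmount once the backup completes
        - ssh 192.168.1.6 umount /mnt/disks/borg_backup
```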
    1 point
  47. The shutdown option is now available in the version of the Parity Check Tuning plugin I released today. I would be interested in any feedback on how I have implemented it, and in any issues found while trying to use the shutdown feature.
    1 point
  48. Setting MakeMKV to run as root user fixed the drive not being detected for me. Use PGID and PUID as 0.
    1 point
  49. Yes, you can do a New Config with retain ALL. Then select parity is already valid when starting the array. No ill effects.
    1 point
  50. unRAID's parity drive provides a lot of protection. But some failure scenarios actually corrupt parity, like a dying drive spewing junk. It doesn't matter if you have one parity or a thousand; this behavior will corrupt all parities and make parity recovery impossible. And when you are talking about low probabilities like dual drive failure, things like flood, hardware failure (e.g. a PSU failure causing a spike), fire, theft, vandalism, and accidents need to be considered. These are things that can easily knock out all or a good percentage of the drives. Tri-parity+ may not help as much as pictures of your hardware, so you could file an insurance claim! Now, I get it, a lot of people have media on their servers. And although any one (or even 100) media files might be recreatable, thousands are not. And since it is not economic for some of us to keep a large backup set, anything we can do to raise the probability of not losing our data is worthwhile. In that spirit, with a large array, an extra parity can make sense. But I did a back-of-the-napkin analysis and figured single parity would protect you some 95% of the time, and that dual parity would protect you maybe 0.5% extra (and that was based on a 100% chance of a disk dying every year). With a third parity, you are in the hundredths or thousandths of a percent of advantage. Is that worth the price of a drive as large as or larger than your largest drive? Two parities can really help in one case: with users who make mistakes. A shot in the foot, or a poorly connected cable while trying to recover from one failure, is hugely more likely than a second physical failure. That's why I recommend dual parity for new users, at least till they learn the ropes. I also recommend hot-swap cages to all but eliminate cable issues (the highest risk for data loss!). Another important thing to remember is that, unlike RAID, losing 2 drives with 1 parity results in losing only 1 or 2 disks' worth of data.
So even in a large array, the scope of your data loss is limited. In an equivalent RAID5 array, you'd lose the entire array. So while data loss in RAID5 and data loss with single parity are equally likely, the impact is hugely bigger for RAID5. With the tools that unRAID gives you to put the array back together and say abracadabra, parity is valid again, many situations become recoverable, even if not 100%. When RFS was popular, we found it to be very good at recovering from data corruption after a partial recovery, and at salvaging the lion's share of a disk even in less-than-optimal recovery conditions. Unfortunately RFS is now a very poor choice, and XFS is the most popular alternative. But XFS does not provide good recovery from corruption, so unfortunately that hurts our ability to recover. (@c3, maybe you can fix this in your spare time!) I don't know about BTRFS; if it is good at recovering from corruption, it might be a better long-term choice. Would 3rd, 4th, and nth parity be useful to some? Probably not, although a lot of users would think so and put those protections in place. But if, instead of loading up on more parities, people used the extra disks for a backup slipped into a safety deposit box, they'd have protected themselves 1000x more than the extra parity would. And everyone should be using hot-swap bays to avoid cabling problems. If we focused on the real risks, and dropped the panacea that the real problem is drive failures, we'd all be more protected!
    1 point