

Popular Content

Showing content with the highest reputation on 10/09/20 in all areas

  1. 2 points
    This blog is a guide on how to securely back up one Unraid server to another, geographically separated Unraid server using rsync and WireGuard, by @spx404. If you have questions, comments, or just want to say hey, post them here! https://unraid.net/blog/unraid-server-to-server-backups-with-rsync-and-wireguard
  2. 2 points
    The time has nearly come. Just finishing up documentation.
  3. 2 points
    You have just described how almost all software functions 🤣
  4. 1 point
  5. 1 point
    The purpose of a UPS is to allow a safe shutdown, not to keep the server running on batteries. If you run the batteries down, you will have to let them recharge sufficiently before starting your server again, or you won't have enough battery for a safe shutdown. After a short time on batteries you can assume the power is going to be off for an unknown length of time, and the server should just go ahead and shut down.
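    Unraid's UPS support is based on apcupsd, so "shut down after a short time on batteries" maps onto a few apcupsd settings (a sketch; the values below are illustrative, and on Unraid they are set via Settings -> UPS Settings rather than by editing the file directly):

    ```
    # apcupsd.conf (illustrative values) - shutdown is triggered by whichever
    # of these thresholds is reached first
    TIMEOUT 300        # shut down after 300 seconds on battery, regardless of charge
    BATTERYLEVEL 50    # ...or when battery charge drops below 50%
    MINUTES 10         # ...or when estimated runtime drops below 10 minutes
    ```

    Setting a short TIMEOUT implements the advice above: don't wait for the batteries to drain before shutting down.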
  6. 1 point
    I just did this and it was super easy: docker exec -it nextcloud updater.phar. The highest it would go, though, is 19.something, not 20.
  7. 1 point
    I just followed the manual update process shown in the first post.
  8. 1 point
    Awesome, thanks. I figured out that with the Network section in Acronis I can just type the address of my NAS (\\tower) and it lets me in. I got it figured out. Thanks!
  9. 1 point
    I agree. Slapped the speedtest CLI into the container and ran it just to see how quick it is:

    Speedtest by Ookla
      Server: Fibrenoire Internet - Montreal, QC (id = 911)
      ISP: Performive
      Latency: 11.25 ms (0.81 ms jitter)
      Download: 558.25 Mbps (data used: 723.1 MB)
      Upload: 283.67 Mbps (data used: 322.1 MB)
      Packet Loss: 0.0%
      Result URL: https://www.speedtest.net/result/c/7215d13b-346b-4fb4-9eb1-d2f150bffb25

    Speedtest by Ookla
      Server: Connect it Networks - Montreal, QC (id = 22079)
      ISP: Performive
      Latency: 11.57 ms (0.84 ms jitter)
      Download: 566.00 Mbps (data used: 545.8 MB)
      Upload: 291.65 Mbps (data used: 343.3 MB)
      Packet Loss: 0.0%
      Result URL: https://www.speedtest.net/result/c/ca7f430e-dbc9-49a6-94ca-69e7fe83d793

    Ran it twice to make sure it wasn't a fluke. Super speedy!
  10. 1 point
    Guinea pig reporting in. Switching to WireGuard was simple and worked flawlessly. I'm on the east coast and now getting 800Mbps into Montreal. Thanks for adding WireGuard to your containers. Had to buy you that beer for all the hard work you do on all your Unraid containers.
  11. 1 point
  12. 1 point
  13. 1 point
    Nope, or rather: yep, with Proton you can actually play Windows games too. Have a look at it. I can confirm, for example, that Astroneer runs flawlessly. EDIT: Proton is built into Steam; you really just have to enable it.
  14. 1 point
    After syslog "rolls over", a new syslog is created and the previous one is renamed syslog.1. It looks like the actual problem filling the log is bad disk4, which you should replace. Why aren't you asking about that instead? Do any of your other disks show SMART warnings on the Dashboard page? You must set up Notifications to alert you immediately, by email or another agent, as soon as a problem is detected. Don't let one unnoticed problem become multiple problems and data loss!
  15. 1 point
    Thank you! I have asked some pretty stupid questions here, and that was probably at the top of the list. Set it to the mnt/user/isos share and all is good now.
  16. 1 point
    Aaaaaand fixed - thanks for your help @binhex - and thanks for all the work you do with your dockers, they're awesome!
  17. 1 point
  18. 1 point
    here is a snippet from your log:

    2020-10-09 14:41:09,753 DEBG 'start-script' stdout output:
    [info] Port forwarding is enabled
    [info] Checking endpoint 'swiss.privateinternetaccess.com' is port forward enabled...
    2020-10-09 14:41:10,251 DEBG 'start-script' stdout output:
    [warn] PIA endpoint 'swiss.privateinternetaccess.com' is not in the list of endpoints that support port forwarding, DL/UL speeds maybe slow
    [info] Please consider switching to one of the endpoints shown below
    2020-10-09 14:41:10,251 DEBG 'start-script' stdout output:
    [info] List of PIA endpoints that support port forwarding:-
    [info] ca-toronto.privateinternetaccess.com
    [info] ca-montreal.privateinternetaccess.com
    [info] ca-vancouver.privateinternetaccess.com
    [info] de-berlin.privateinternetaccess.com
    [info] de-frankfurt.privateinternetaccess.com
    [info] france.privateinternetaccess.com
    [info] czech.privateinternetaccess.com
    [info] spain.privateinternetaccess.com
    [info] ro.privateinternetaccess.com
    [info] israel.privateinternetaccess.com
    2020-10-09 14:41:10,251 DEBG 'start-script' stdout output:
    [info] Attempting to get dynamically assigned port...
    2020-10-09 14:41:10,288 DEBG 'start-script' stdout output:
    [warn] PIA VPN port assignment API currently down, terminating OpenVPN process to force retry for incoming port...

    So you have two issues. First, you are forcing strict port forwarding while connecting to an endpoint that doesn't support port forwarding, see above. Second, you are using the PIA legacy network; you most probably want to switch over to next-gen, as the legacy network will be shut down at the end of this month (according to PIA). Instructions for switching are in Q19 here: https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
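    For the endpoint part of the fix, switching servers is just a matter of changing the endpoint variable in the container template (a sketch; VPN_REMOTE is the variable name used in binhex's VPN documentation, and the value shown is one example taken from the log's list):

    ```
    # In the container's template (Edit -> VPN_REMOTE), point at a
    # port-forward-capable endpoint from the list in the log above
    VPN_REMOTE=ca-montreal.privateinternetaccess.com
    ```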
  19. 1 point
    This means that you lost all settings (including container settings). It is advisable to make regular backups of the USB drive, either by clicking on it on the Main tab or by using the CA Backup plugin. The Previous Apps feature relies on templates that are stored on the USB drive; wiping the drive lost those templates, which means the containers need setting up again. Since you still have the appdata folders intact, the apps will find their working files if you use the same settings as you used previously.
  20. 1 point
    Awesome! I'm glad that I haven't entirely lost my mind
  21. 1 point
    Thanks, "ich777", for the detailed answer. I will also put a post regarding snapshots on the English-language wishlist. Regarding the Synology's poor performance, I agree with you completely (as soon as anything more than NAS functionality is required). The snapshot function is meant only as a suggestion, since I know the LimeTech team is not very large and probably lacks the capacity. It doesn't have to come with 6.9, either. I always read through the bug reports for 6.9 myself. I can't quickly give you the link regarding the libvirt error; it was about half a year ago. I searched for the error message from the Unraid boot dialog and found the posts here in the forum from people who had the same problem and had identified the VM Backup plugin as the cause. After I uninstalled it, the error messages when booting Unraid were gone! To come back to Synology once more: I know it is a large company with more resources. However, the snapshot function is available in other distros, e.g. ESXi, Proxmox, etc. Unraid is often described as a hypervisor with similar (if reduced) capabilities. For me personally it would simply be an immense added value. The following VMs are currently running on my system: Xpenology for Surveillance Station (I have camera licenses for surveillance of my house), Home Assistant (hassio), an Ubuntu DNS server, and two Windows 10 machines (one for work programs and one for tinkering and gaming with a passed-through GTX 1660). The Windows machines are only on when needed, though. Greetings from Bavaria
  22. 1 point
    Hi All,

    First, I hope this is the right forum; otherwise please move it.

    TLDR: My server looks like the hunchback from Notre Dame...

    Well, over some years I have built a Supermicro server, and I have not been kind. For those interested, here is a part list:

    Case: Supermicro SC846
    M/B: X9DRI-LN4F+
    CPU: 2x Intel Xeon E5-2630L v2 (2.4GHz)
    RAM: 24x16GB = 384GB
    HDD: total of 23 (one with a broken connector, fixed by soldering an extension cable directly to the PCB)
    SSD: 8x256GB = 1TB (RAID 10)
    Slot1: Asus HYPER M.2 X16 CARD with 4x256GB NVMe
    Slot2: ASMedia ASM1142 USB 3.1
    Slot3: GPU blocked
    Slot4: Nvidia GTX 1660 Super
    Slot5: Nvidia GTX 1030
    Slot6: LSI MegaRAID SAS 2008
    Slot7: USB from motherboard to external connector

    Now the observant reader will notice that my GTX 1030 is not blocking a slot. But that is not all. This case does not have PCIe connectors, nor does it have designated space for 8x SSDs. I solved the PCIe problem with help from eBay, where I found some M/B CPU splitters and a CPU-to-PCIe converter; with two of each, I can now have two graphics cards installed. But the GTX 1030 is NOT directly inserted in the PCIe slot. Again with help from eBay I found a PCIe x16 extender, and with two in series I am now able to have it inside the case, on top of the CPU shroud.

    Last, the configuration of the M/B means there are two USB2 controllers onboard: one controls all the external connectors, and the other controls the DOM port and the BMC (the IPMI mouse and keyboard). Since I want a VM with USB access, it makes the most sense to pass through the USB controller that controls the external ports. But that makes it impossible to attach a keyboard for console use, as the internal DOM port is used for the Unraid key. So I found a small USB hub and extended two of its ports via a bracket in Slot7. Now I can have both mouse and keyboard and still boot from the USB key.

    If you wonder how this looks, here is a picture, and I admit it is not pretty.
And now, back to the title: after all this, my server's name is now Quasimodo. /Alphahelix
  23. 1 point
    I have found the problem, at last. It seemed there was an old instance of Unraid present on the hard disk I bought from the store. Apparently it was an SSD that had been returned to that store within 30 days, which they then sold to me as "new". With two Unraid disks present on the system, Unraid kind of freaked out. What fixed it was formatting/purging the SSD. 🤓
  24. 1 point
    Can you post a link to those threads? The best thing would be to make a post here and vote for it too. Limetech is concentrating on the 6.9.0 RC for now (which may possibly already include Nvidia integration). Snapshots may be supported in Unraid in the future, but I can't yet tell you when, since Limetech always tests extensively whether a feature really works the way it should. A fair point, but you have to keep in mind that we are really in the consumer space here. Just as background: Unraid was originally designed as a NAS only, without any additional features, around its more or less unique array. Unraid does make an effort to add features bit by bit, but you must not forget that there are many requests, as well as bugs (especially with new platforms like AMD), that had to be ironed out first and that required a current kernel, which in turn led to new problems. I only ever had one Synology, and it was gone again quickly, since I was more than disappointed by its performance and features for the price. Which VMs (OS) do you want to take snapshots of? I can't really help you with BTRFS snapshots, but maybe others know more about that. Here is a post that might help you:
  25. 1 point
  26. 1 point
    I honestly don't know; I never did much testing on that. Time permitting, I'll try doing some over the weekend, with different workloads and a couple of controllers, to see if there's any clear difference one way or the other.
  27. 1 point
    You can disable those health checks by adding --no-healthcheck to the Extra Parameters of each Docker container in the Advanced view.
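    For reference, Extra Parameters are simply appended to the docker run command Unraid builds, so the equivalent on a plain Docker host would be (container name and image chosen as an example):

    ```
    # --no-healthcheck disables any HEALTHCHECK instruction baked into the image,
    # which stops the periodic health-probe writes
    docker run -d --name plex --no-healthcheck plexinc/pms-docker
    ```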
  28. 1 point
    I am using 6.9.0-beta29 (will go to beta30 ASAP) and see those writes too. I noticed that only two of my containers end up writing to these two files, and they are the only containers that seem to have a Docker health check defined. Here is the reason you see a write every 5 seconds for your Plex container: https://github.com/plexinc/pms-docker/blob/master/Dockerfile#L62

    HEALTHCHECK --interval=5s --timeout=2s --retries=20 CMD /healthcheck.sh || exit 1
  29. 1 point
    Sorry, there's no individual control with the Supermicro boards.
  30. 1 point
    I'm not sure about the RPM difference. It's something to do with the fan or IPMI, but there's not much I can do. You are correct about the PWM requirements. I have the fan control turn off the smart control.
  31. 1 point
    Similar to this bug: https://github.com/moby/moby/issues/40183, but 6.8.3 is using Linux 4.19.107, which I verified has the patch referred to at the end of that topic. So I'd ask you to test on the latest 6.9-beta, though it's understandable if you don't want to do that.
  32. 1 point
    You are probably correct that the IPMI commands may be different for this board. I would need the correct commands to make it work. I have emailed ASRock support about this to see what the correct commands are; it doesn't look like their official guide here is correct for my board: https://www.asrockrack.com/support/faq.asp?id=38 Are there any ways you know of to figure out the raw commands to use via ipmitool? I haven't had much luck searching for that answer, and I'm more than happy to debug this at a lower level if I knew where to look.
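    As a starting point for that lower-level digging, generic ipmitool usage looks like this (the raw bytes shown are placeholders for the general shape of the command, not known-good values for this ASRock board; the real netfn/cmd/data bytes have to come from the vendor):

    ```
    # List the sensors the BMC exposes - fan sensor names often hint at
    # which control zones exist
    ipmitool sensor
    # Extended SDR listing with record IDs
    ipmitool sdr elist
    # Generic shape of a raw command: netfn, then command, then data bytes
    # (placeholder bytes - do NOT send blindly to a production BMC)
    ipmitool raw 0x30 0x00
    ```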
  33. 1 point
    Great, I got your PM too. I don't know what they changed to make it work (a QEMU update, maybe), but on beta29 and beta30, nested VMs inside a Windows VM are working. Prior to beta29, the whole Unraid server would just freeze when trying to enable Hyper-V or run any VM (including the Android emulator). No script change is needed.
  34. 1 point
    guinea pig time again - WireGuard support is now in; if you are interested then see here: https://forums.unraid.net/topic/44109-support-binhex-delugevpn/?do=findComment&comment=433617
  35. 1 point
    Well, that was weird. With the 3 spaces it does work. Thanks! One question: would it be possible to use the container as a gateway? Currently I'm using an Ubuntu server VM configured as in the video below (with mods). That's because when using OpenVPN as a proxy, the service I try to connect to detects the VPN (I guess there are leaks). With a gateway, everything works as intended. Thanks.
  36. 1 point
    You don't quite need all those super advanced techniques, as they only offer marginal improvement (if any). And you have to take into account that some of those tweaks were for older-gen CPUs (e.g. NUMA tuning was only required for Threadripper gen 1 + 2) and some were workarounds while waiting for the software to catch up with the hardware (e.g. the cpu-mode tweak to fake a Threadripper as an Epyc so the cache is used correctly is no longer required; use Unraid 6.9.0-beta1 for the latest 5.5.8 kernel, which supposedly works better with 3rd-gen Ryzen; compile your own 6.8.3 with the 5.5.8 kernel for the same reason, etc.)

    In terms of "best practice" for a gaming VM, I have these "rules of hand" (cuz there are 5 🧐):

    1. Pick all the VM cores from the same CCX and CCD (i.e. die); this improves fps consistency (i.e. less stutter). Note: this is specific to a gaming VM, for which maximum performance is less important than consistent performance. For a workstation VM (for which max performance is paramount), VM cores should be spread evenly across as many CCX/CCD as possible, even if it means partially using a CCX/CCD.
    2. Isolate the VM cores in syslinux. The 2020 advice is to use isolcpus + nohz_full + rcu_nocbs (the old advice was just isolcpus).
    3. Pin the emulator to cores that are NOT the main VM cores. The advanced technique is to also pin iothreads, but this only applies if you use vdisk / ata-id pass-through; from my own testing, iothread pinning makes no difference with NVMe PCIe pass-through.
    4. Do the MSI fix with msi_util to help with sound issues. The advanced technique is to put all devices from the GPU on the same bus with multifunction; to be honest though, I haven't found this to make any difference.
    5. Don't run parity sync or any heavy IO / CPU activities while gaming.

    In terms of where to find these settings:

    1. The 3900X has 12 cores, which is 3x4: every 3 cores is a CCX, and every 2 CCX is a die (your 3900X has 2 dies + an IO die).
    2. Watch the SpaceInvader One tutorial on YouTube. Just remember to do what you do with isolcpus for nohz_full + rcu_nocbs as well.
    3. Watch the SpaceInvader One tutorial on YouTube. This is a VM XML edit.
    4. Watch the SpaceInvader One tutorial on YouTube. He has a link to download msi_util.
    5. No explanation needed.

    Note that due to the inherent CCX/CCD design of Ryzen, you can never match an Intel single-die CPU when it comes to consistent performance (i.e. less stutter). And this comes from someone currently running an AMD server, not an Intel fanboy. And of course, running a VM will always introduce some variability over bare metal.
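    The core-isolation advice above can be sketched as a syslinux.cfg append line, assuming a hypothetical pinning of cores 3-5 and their hyperthread siblings 15-17 (mirror whichever cores you actually pin to the VM):

    ```
    label Unraid OS
      kernel /bzimage
      append isolcpus=3-5,15-17 nohz_full=3-5,15-17 rcu_nocbs=3-5,15-17 initrd=/bzroot
    ```

    isolcpus keeps the scheduler off those cores, while nohz_full and rcu_nocbs move timer ticks and RCU callbacks off them, which is what reduces stutter inside the VM.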
  37. 1 point
  38. 1 point
    Stop the VM, then left-click on the VM name (not the icon). You'll then see your disk devices listed. In the Capacity column, the size can be clicked on and edited.
  39. 1 point
    How do I move or recreate docker.img?

    The easy way to move docker.img:
    * Go to Settings -> Docker -> Enable Docker, set it to No, then click the Apply button (this disables Docker support)
    * If recreating your docker.img file, switch to Advanced View, check off the box and press Delete, then skip the next step
    * Using mc, any file manager, or the command line, move docker.img to the desired location (/mnt/cache/docker.img is recommended)
    * In Settings -> Docker, change the path for the Docker image to the exact location you just copied it to
    * Now set Enable Docker back to Yes and click the Apply button again (re-enabling Docker support)

    The standard way to move or recreate docker.img is to stop Docker support, delete the current image, re-enable Docker support and recreate the image in the desired location (/mnt/cache/docker.img is recommended), then re-add your current templates. Your settings should be safe, and nothing else is moved or changed, so once your templates are restored and the Dockers are restarted, they *should* work just the same. All of that is found in a guide by JonP: ***OFFICIAL GUIDE*** Restoring your Docker Applications in a New Image File

    An easier way to reinstall your applications is to go to the Apps tab, Previous Apps section, check off all of your previous applications, and hit "Install".
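    The command-line step of the move can be sketched as (the source path shown is a common default location, not necessarily yours; verify the current path in Settings -> Docker before moving, and only run this while Docker support is disabled):

    ```
    # With Docker disabled (Settings -> Docker -> Enable Docker: No),
    # move the image to the cache drive
    mv /mnt/user/system/docker/docker.img /mnt/cache/docker.img
    ```

    After the move, update the Docker image path in Settings -> Docker to match the new location before re-enabling Docker.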