Popular Content

Showing content with the highest reputation on 10/25/20 in all areas

  1. 1 point
    October 2020 [GUIDE] Installing Unraid (ver. 6.8.3) on Proxmox (ver. 6.2-4)

Background: I have been using Unraid for close to a year and only started using Proxmox a few weeks ago, but I have used other virtualization technologies for years, both hyperconverged (Cisco HyperFlex, Nutanix) and hypervisors (VMware, Hyper-V, VirtualBox, etc.). I wanted to virtualize Unraid on Proxmox, but only found complicated older forum posts and thought there should be a better way. After some experimenting I believe I found an easy way to set this up, and I wanted to share it with the community.

1. Download the Unraid Server OS and make a USB key (https://unraid.net/download). Note that USB 2.0 drives are generally more reliable than USB 3.0/3.1 for Unraid boot keys. Unraid does not install to a drive; it only and always boots from the USB key, loads into memory on each boot, and runs from there.

2. Boot the Unraid OS on your server without Proxmox running. In other words, you can have Proxmox installed, but boot from the key once to make sure it works with your hardware. I went through three USB drives, including two of the same type, before I found one that would work! If you have DHCP on your network, you should get an address you recognize (my network is 10.0.0.x). If you get an address starting with 169.254, that generally means the USB drive is bad, and you will probably see messages about files not being found. You can also log in to the Unraid console, delete /boot/config/network.cfg, and reboot to see if that fixes it, but more than likely it is an incompatible USB drive.

3. Reboot back into Proxmox.

4. Make a VM with the following settings:
Memory: Unraid likes memory, so make sure you give it enough.
Processors: Go to the bottom of the list of processor types and pick "host", which passes through your host's CPU. This prevents migrating the Unraid VM to another host, but you can't do that anyway, since it is tied to your USB key.
BIOS: Use SeaBIOS.
Display: The default is fine. I love SPICE, so that is what I use; if you use SPICE you need the viewer installed on your workstation.
Machine: The default i440fx.
SCSI Controller: VirtIO SCSI. I tried the others, but this works fine.
CD/DVD Drive: Proxmox will not boot a USB drive in a VM, but it can boot an ISO, and there is an ISO that will boot a USB drive... see where I'm going? Make your life easy: download plopkexec (https://www.plop.at/en/plopkexec/download.html), extract it, upload plopkexec64.iso to your ISO share, and mount that ISO in the VM. Ta-da: no configuration, and it just works.
Hard Disk: Make a virtual disk for the VM. I really don't recommend making several virtual disks and building an array from them inside Unraid; just make one and go with that. You can always make a larger one later and copy your files across inside Unraid if you want to change size. Alternatively, you can pass through physical hard drives and other USB drives if you want Unraid to control them directly, and then set up an array just as you normally would. But if you are doing that, you might as well drop Proxmox and run Unraid on bare metal, since Unraid handles VMs and Docker natively.
Network Device: Use e1000. I had problems getting any other virtual NIC to work. With other NIC types you may end up with no IP, or a 169.254-type address; again, you can try deleting /boot/config/network.cfg from the Unraid console and rebooting to see if that fixes it.
USB Device: You must pass through the USB device that Unraid is installed on. Done this way, rather than by copying the USB contents to a virtual drive, your Unraid experience is the same as on a non-virtualized server: changes get copied back to the USB key as they should.

5. Boot your VM.

In summary, there are a few gotchas: getting the correct NIC type, getting a USB stick to boot, and the big one, getting a good install on a USB key that boots correctly on your hardware. I have tested a VM in Unraid (I'm NOT going to run any in Unraid under Proxmox, but it does work), and I have tested several Dockers. All work as expected.
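For reference, the VM settings above could end up looking roughly like this in Proxmox's per-VM config file (/etc/pve/qemu-server/&lt;vmid&gt;.conf). This is a sketch only: the VM ID 100, the storage names (local, local-lvm), the disk size, the MAC address, and the USB vendor:product ID are placeholders to substitute with your own values.

```text
# Hypothetical /etc/pve/qemu-server/100.conf for the Unraid VM described above
bios: seabios
boot: d
cores: 4
cpu: host
ide2: local:iso/plopkexec64.iso,media=cdrom
memory: 8192
name: unraid
net0: e1000=DE:AD:BE:EF:00:01,bridge=vmbr0
scsi0: local-lvm:vm-100-disk-0,size=500G
scsihw: virtio-scsi-pci
usb0: host=0781:5571
```

`boot: d` makes the VM boot from the CD/DVD drive (the plopkexec ISO), and `usb0` passes through the Unraid USB key by its vendor:product ID as shown by `lsusb`.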
  2. 1 point
    I think you've got it right. Where I often see this is when helping users with their Docker setup who are trying to go the other way with cache-prefer data. Somehow they get the system share on both cache and array, probably by starting Dockers with the cache missing. This can result in a duplicate docker.img, for example.
  3. 1 point
    We should get back to the actual topic here and wait for moonsorrox's reply...
  4. 1 point
    SMART looks OK, but it doesn't look like an extended SMART test has ever been done on that disk. This seems likely. From the syslog it looks like a connection problem to me. You should always double-check all connections on all disks, power and SATA, including splitters, any time you are mucking about inside. Here are some relevant excerpts from syslog:

Oct 25 13:21:44 NAS kernel: sd 1:1:3:0: [sde] tag#37 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00 cmd_age=9s
Oct 25 13:21:44 NAS kernel: sd 1:1:3:0: [sde] tag#37 CDB: opcode=0x88 88 00 00 00 00 00 38 26 11 50 00 00 00 20 00 00
Oct 25 13:21:44 NAS kernel: blk_update_request: I/O error, dev sde, sector 942018896 op 0x0:(READ) flags 0x0 phys_seg 4 prio class 0
Oct 25 13:21:44 NAS kernel: md: disk3 read error, sector=942018832
Oct 25 13:21:44 NAS kernel: sd 1:1:3:0: [sde] tag#33 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00 cmd_age=9s
Oct 25 13:21:44 NAS kernel: md: disk3 read error, sector=942018840
Oct 25 13:21:44 NAS kernel: sd 1:1:3:0: [sde] tag#33 CDB: opcode=0x88 88 00 00 00 00 00 38 26 0f 50 00 00 02 00 00 00
Oct 25 13:21:44 NAS kernel: md: disk3 read error, sector=942018848
Oct 25 13:21:44 NAS kernel: sd 1:1:3:0: [sde] tag#91 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00 cmd_age=9s
Oct 25 13:21:44 NAS kernel: md: disk3 read error, sector=942018856
Oct 25 13:21:44 NAS kernel: blk_update_request: I/O error, dev sde, sector 942018384 op 0x0:(READ) flags 0x0 phys_seg 64 prio class 0
Oct 25 13:21:44 NAS kernel: sd 1:1:3:0: [sde] tag#91 CDB: opcode=0x88 88 00 00 00 00 00 38 26 0d 50 00 00 02 00 00 00
Oct 25 13:21:44 NAS kernel: blk_update_request: I/O error, dev sde, sector 942017872 op 0x0:(READ) flags 0x0 phys_seg 64 prio class 0
Oct 25 13:21:44 NAS kernel: md: disk3 read error, sector=942018320
Oct 25 13:21:44 NAS kernel: md: disk3 read error, sector=942017808
Oct 25 13:21:44 NAS kernel: md: disk3 read error, sector=942018328
Oct 25 13:21:44 NAS kernel: md: disk3 read error, sector=942017816
Oct 25 13:21:44 NAS kernel: blk_update_request: I/O error, dev sde, sector 942017840 op 0x0:(READ) flags 0x0 phys_seg 4 prio class 0
Oct 25 13:21:44 NAS kernel: md: disk3 read error, sector=942018336
...
Oct 25 13:21:44 NAS kernel: md: disk3 read error, sector=942011224
Oct 25 13:21:44 NAS rc.diskinfo[12723]: SIGHUP received, forcing refresh of disks info.
Oct 25 13:21:44 NAS kernel: md: disk3 write error, sector=942022904
Oct 25 13:21:44 NAS kernel: md: disk3 write error, sector=942022912
Oct 25 13:21:44 NAS kernel: md: disk3 write error, sector=942022920
Oct 25 13:21:44 NAS kernel: md: disk3 write error, sector=942022928

It looks like read failures are what really started it. When Unraid can't read a disk, it tries to write the emulated data back to it, and if that write fails the disk gets disabled. It is mostly guesswork, but the disk may not be far out of sync if nothing was really writing to it and the emulated data is what was already on the disk. If you want to take a chance, you could unassign the disk and mount it read-only as an Unassigned Device to check its contents. If it seems OK, you could do New Config / Trust Parity; if it doesn't look OK, you could reassign and rebuild. In any case a parity check should be done, either to confirm nothing was out of sync or to confirm the rebuild went well. If you really want to take a chance you could even postpone that so you can use the server, but if anything is out of sync, rebuilding a real failure later could be compromised. Do you have good (enough) backups?
  5. 1 point
    ... I have all my files back... Again, I want to thank you for being such a great help and for responding so quickly... I guess I chose the best system... Now just to make sure I get to understand the system a bit better...
  6. 1 point
    Unraid is an Operating System, so it needs its own computer to run on. Typically that will be a computer that can have disks installed in it. Some people have used external enclosures as a way to add space for drives to their computer that is running Unraid, but this approach can have some complications of its own.
  7. 1 point
    Read the first few pages of this thread. Be sure that your Unraid server is the Local Master. Be sure that the Workgroup names match on the server and on the Windows client. https://forums.unraid.net/topic/89452-windows-issues-with-unraid/ You can also try to access the server by IP address, entering it in the address bar of Windows Explorer as \\192.168.1.252 Also make sure that the settings shown in the attached screenshots are turned on.
  8. 1 point
    I would almost advise against the first generation. Sure, you can get it dirt cheap, but for the same money you can already get the second generation, which offers better RAM support and is more power-efficient with more performance. All Ryzen CPUs benefit from faster RAM, and that alone basically rules out the first generation for me, since there were many problems there. And with a BIOS update your board even supports the third generation, which added quite another big step. Once the 5000 series becomes available, prices for the 3xxx will drop quickly too. I would not rush into buying a 1700. Especially in games, the difference between 1st and 2nd gen, and then to the 3rd, is considerable.

That thought only occurred to me after I had already sent the post: maybe you could borrow a card somewhere for testing.

I have been using VMs on Unraid as my daily driver for 3 years now. I started with a TR4 1950X for over 2 years, and have run a 3960X for the last half year. Granted, that is not a regular Ryzen, but the chiplets, the RAM support, and the Unraid support are practically identical. The 1950X had just as many RAM problems at launch as the 1000 series did, if not more, given quad-channel memory and more components on the board. After the first month of fiddling and exchanging notes with this forum, everything ran fine: 2 VMs, each with its own GPU and USB controller and a passed-through NVMe. The 2000 generation was no reason for me to upgrade, despite more possible cores and better support. AMD unfortunately dropped support for 3000-series Threadripper on TR4, so I moved to a new platform. What can I say: basically everything ran out of the box from the start, and virtualization support improved yet again.

Splitting individual devices into separate IOMMU groups is now possible without an extra patch. Many users have reported similar improvements with third-generation Ryzen. Passing devices through to VMs is much easier now than on the first generation. The only problems I had in all that time came from testing Unraid's RC versions every now and then; I reported the problems reproducibly here in the forum, and the next stable release included a fix. Oh, and a Windows update once broke a driver for a passed-through USB controller, but that would have happened without Unraid too. Windows being Windows 😬

On your board in general: unfortunately you only have ONE slot for a GPU, plus 2 PCIe x1, 1 NVMe, and 4 SATA. There is really no room for expansion. Believe me, once you get a taste of what you can do with Unraid (Docker, VMs, bulk storage), you will very quickly want/need another slot for an HBA, more NVMe, or more SATA ports for even more disks.

As for the point raised a few times about "a dedicated GPU for Unraid itself": it is basically irrelevant. Unraid is designed to run headless. The only case where it makes sense is if you run a Plex Docker and need a dedicated GPU for video decoding, and even that is questionable for most people; who streams 5-10 videos in parallel in one household? Limetech itself says that using Unraid with a GPU to display a GUI only makes sense for troubleshooting and isn't really recommended. And even that you can do: on my machine, if I pick the corresponding boot entry, Unraid starts directly on the first GPU and shows a desktop with a browser. But honestly, you don't need it. I haven't used it once in 3 years, especially since it renders at an extremely low resolution and is painful to use. Opening the GUI over the network in a browser, or from within a VM, is much more practical.
  9. 1 point
    Edit the script if you want to, or let the script create the directory. Create another instance of the upload script and choose copy, not move or sync. Yes - create more instances of the mount script, and disable the mergerfs mount if you don't need it. If you want the other drives in your mergerfs mount, add the extra rclone mount locations as extra local folder locations in the mount script that creates the mergerfs mount.
  10. 1 point
    Just wanted to report back and say there have been no issues after moving the flash drive from the back motherboard ports to the front-panel USB port. Thanks a lot for the speedy help!!! Any idea if the IRQ16 issue could be a controller/hub issue or a single-port issue?
  11. 1 point
    Usually the mover runs on a schedule, but sometimes we run the mover manually. When running it manually, it would be nice to know how long the mover will take to copy all the files. I feel safer not working with files while the mover is running (sure, it's paranoia, but I feel safe). Would it be possible to add some kind of % progress bar showing information about the mover process? At least then I would know how long it will take. Thank you, Gus
  12. 1 point
  13. 1 point
    No, it never went into standby..... because for some reason it then wouldn't wake up again. To be honest, I never calculated the power consumption - probably partly out of fear of the result...!
  14. 1 point
    Make sure the BIOS is up to date. Some boards might not have this option, but they should.
  15. 1 point
    You don't have to encrypt your files if you don't want to. Just create an unencrypted rclone remote. This is very easy to do - if you need help doing this there are other threads (although you can probably work out what you need to do in this one), as this thread is for support of my scripts.

In my scripts:
RcloneMountShare="/mnt/wherever_you_want/mount_rclone" - doesn't matter, as these aren't actually stored anywhere
LocalFilesShare="/mnt/ua_hdd_or_whatever_you_called_it/local" - for the files that are pending upload to gdrive
MergerfsMountShare="/mnt/wherever_you_want/mount_mergerfs" - doesn't matter, as these aren't actually stored anywhere

I've just checked my readme, and once you've worked out how to set up your remotes - which isn't covered, though it shows what they should look like afterwards - all the information you need is there: https://github.com/BinsonBuzz/unraid_rclone_mount/blob/latest---mergerfs-support/README.md
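For anyone unsure what "unencrypted" means here: after running `rclone config` (or `rclone config create`), an unencrypted remote is simply a plain entry in rclone.conf with no `crypt` wrapper around it. A hypothetical Google Drive example - the remote name is arbitrary and the token values are redacted placeholders:

```ini
[gdrive]
type = drive
scope = drive
token = {"access_token":"...","refresh_token":"...","expiry":"..."}
```

You would then point the mount script at `gdrive:` directly instead of at a crypt remote layered on top of it.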
  16. 1 point
    Actually, really well - everything is working perfectly.
  17. 1 point
    Hi @ghost82, sorry for the late reply - I've been busy lately, including finishing a new 14-core Skylake-X build that replaced one of my main workstations. Indeed, I have multiple networks: 1x 1GbE and 1x 10GbE, both passed-through cards. Your OpenCore device config trick worked like a charm. The XML-only method did not, likely, as you suggested, because of the multiple networks. Although I remember having it working in the past when one of the two was a virtual network and the other a passed-through card - but the virtual one was never stable enough, so I moved to real metal. Thanks again.
  18. 1 point
    Sorry for the misunderstanding, I thought I was in the undervolting power save thread.
  19. 1 point
    I use Gigabyte, and TBH I don't really worry about it. HPA is only an issue if it winds up on the parity disk, and anecdotally the BIOS only ever pops in an HPA if one doesn't already exist on another drive, and only if the system attempts to boot off of a hard drive instead of the flash. I.e., it's easy to mitigate: in the BIOS you just set the only boot device to be the flash drive (pretty much what you want anyway). But even if it does wind up on the parity disk, it's fairly easy to get rid of. Any other drive it's on, I don't really care - I'm not going to miss the ~1 MB of storage space it takes up.
  20. 1 point
    Not too sure if that is valid even without a space. You can only put an IP address on its own (that is how I have always used it) or an FQDN. As @jzawacki mentioned above, the docker has no mechanism for changing the port, so any attempt to change it to a non-default one will not work. The docker has to run on the default port, whether that is convenient for you or not.
  21. 1 point
    Here you go, but some of these settings depend on the MeshCentral config inside the container. This is my redacted MeshCentral config file - I redacted it and removed all unused config parts. I'm using MeshCentral behind my Apache reverse proxy.

{
  "__comment__": "This is a sample configuration file, edit a section and remove the _ in front of the name. Refer to the user's guide for details.",
  "settings": {
    "Cert": "mesh.example.com",
    "WANonly": true,
    "_LANonly": true,
    "SessionKey": "redacted7",
    "_CookieIpCheck": false,
    "CookieEncoding": "hex",
    "_IgnoreAgentHashCheck": true,
    "Port": 8443,
    "AgentPort": 443,
    "AgentAliasPort": 443,
    "_AliasPort": 443,
    "RedirPort": 80,
    "_ExactPorts": true,
    "WebRTC": false,
    "_Nice404": false,
    "ClickOnce": true,
    "_SelfUpdate": true,
    "_AgentPing": 60,
    "AgentPong": 300,
    "AllowHighQualityDesktop": true,
    "TrustedProxy": "192.168.50.248",
    "MpsPort": 0,
    "AutoBackup": {
      "backupIntervalHours": 24,
      "keepLastDaysBackup": 5
    },
    "MaxInvalidLogin": {
      "time": 10,
      "count": 10,
      "coolofftime": 10
    },
    "Plugins": {
      "enabled": true
    }
  },
  "domains": {
    "": {
      "Title": "example mesh",
      "Title2": "",
      "NewAccounts": false,
      "novnc": true,
      "mstsc": true,
      "CertUrl": "https://192.168.50.248:443/"
    }
  }
}
  22. 1 point
    Yes, I've been running MeshCentral in Docker for about 5 months. Here is my repo: https://hub.docker.com/r/uldiseihenbergs/meshcentral I can post screenshots of the config page, just ask.
  23. 1 point
    I never did. I ended up moving to a Gigabyte BRIX (NUC) that I had lying around, and it's running under Windows 10.
  24. 1 point
    <os>
      <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
      <loader readonly='yes' type='pflash'>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_CODE.fd</loader>
      <nvram>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_VARS.fd</nvram>
    </os>

This <os> tag is what you need to change.
  25. 1 point
    The commands are for the Unraid terminal not the docker console.
  26. 1 point
    I tried updating but Deluge v2 seems to be really buggy. Trackers just stop updating. Guess I'll see if PIA will issue refunds.
  27. 1 point
    My system log fills up, and I know exactly why. Is there a way to clear it without a restart?
  28. 1 point
    Why? You might be able to delete the old rotated logs (syslog.1, syslog.2, ...) in /var/log. I doubt you can delete the current syslog.
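If deleting the live file doesn't work, truncating it usually does. A minimal sketch, assuming the standard /var/log paths on Unraid, run as root from the Unraid terminal:

```shell
# Empty the live syslog in place without a restart; the syslog daemon keeps
# the same file handle open, so logging continues into the now-empty file.
truncate -s 0 /var/log/syslog

# Rotated copies are plain files and can simply be removed.
rm -f /var/log/syslog.1 /var/log/syslog.2
```

Deleting the live syslog outright is what tends to fail, because the daemon keeps writing to the (now unlinked) open file until it restarts; truncation avoids that.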
  29. 1 point
    A little bit more information, please... Which version, 6.8.x or 6.9.0-beta25? What did they update exactly? (BTW, I wanted to build a new image for my server in a few minutes anyway; I will look into it.) These should be the steps for flashing the Mellanox cards:

1. Download the firmware for your card: https://www.mellanox.com/support/firmware/connectx2en
2. Extract the bin file to your server, say to one of your shares (in this example 'firmware').
3. Open a terminal and go to the share you copied the firmware to (in this example 'firmware'): 'cd /mnt/user/firmware'
4. Then type '/sbin/lspci -d 15b3:' and you should get something like: '07:00.0 Ethernet controller: Mellanox Technologies MT26448 [ConnectX EN 10GigE, PCIe 2.0 5GT/s] (rev b0)'
5. Then type 'mstflint -d 07:00.0 -i firmware.bin burn' (replace '07:00.0' with the device ID from the output of step 4, and 'firmware.bin' with the name of the extracted bin file from step 2). This should start burning/flashing the firmware (it's a little different since it's based on the open-source tools).

Have you also installed the Unraid-Kernel-Helper plugin? Can you also send a screenshot of it?

EDIT: Opened an issue on GitHub because of this:

tar -C /usr/src/libnvidia-container/deps/src/nvidia-modprobe-396.51 --strip-components=1 -xz nvidia-modprobe-396.51/modprobe-utils
######################################################################### 100.0%curl: (28) Failed to connect to codeload.github.com port 443: Connection timed out
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now
make[1]: *** [/usr/src/libnvidia-container/mk/nvidia-modprobe.mk:34: /usr/src/libnvidia-container/deps/src/nvidia-modprobe-396.51/.download_stamp] Error 2
make[1]: Leaving directory '/usr/src/libnvidia-container'
make: *** [Makefile:223: deps] Error 2

EDIT2: In the meantime I can send you my image; it's built with nVidia, Mellanox tools, DigitalDevices DVB, and also iSCSI support.

EDIT3: I think GitHub has some problems with redirection; if I try the command multiple times it works every time after the 2nd or 3rd attempt.

EDIT4: Everything is now working again. Please also redownload or update the container, since I've added a check so this doesn't happen again, and also implemented multicore and better compression of the images themselves.
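The flashing steps above can be sketched as one sequence. This is only a sketch: 'firmware' and 'firmware.bin' are the placeholder share and file names from the steps, and the device ID is taken from the first Mellanox device lspci reports - double-check it before burning.

```shell
# Go to the share holding the extracted firmware (placeholder name 'firmware')
cd /mnt/user/firmware

# Grab the PCI address of the first Mellanox device (vendor ID 15b3)
DEV=$(/sbin/lspci -d 15b3: | awk 'NR==1{print $1}')
echo "Flashing Mellanox device at $DEV"

# Burn the extracted firmware image onto the card
mstflint -d "$DEV" -i firmware.bin burn
```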
  30. 1 point
    You can check with 'lspci -vv'. Your HBA is x8.
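To check just one device rather than scrolling through the full output, you can filter on the link capability and status lines. The bus address 01:00.0 here is an example - take the real one from plain `lspci` output:

```shell
# LnkCap = maximum link the device supports; LnkSta = currently negotiated link
lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'
```

If LnkSta shows a narrower width than LnkCap (e.g. x4 vs x8), the card is negotiating below its capability, often because of the slot it is in.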