Popular Content

Showing content with the highest reputation since 01/10/20 in all areas

  1. 2 points
    Previous section in Community Applications does exactly that. Check off everything you want, done.
  2. 2 points
    Very nice. Similar to what we want to do, except I was also going to back up the LUKS headers first because I'm just that paranoid 😋
  3. 2 points
    Just my 2 cents on the controller front. If using iOS, iPeng is definitely worth it. For Android I think the options are pretty miserable TBH and nowhere near as polished as iPeng. I used Squeezer for a while and was reasonably impressed. Then I bought Orange Squeeze and IMO it is not that great... maybe a little bit better than Squeezer but not worth the asking price. However, my tip is to install the Material theme/skin for LMS and then point your Chrome browser at that. It has a wonderful responsive design, so it works equally well on a desktop as on a mobile device. You can get Chrome on Android to create a nice little icon and place it on your homescreen, which will then open a chrome-less (app-like) experience. Very nice indeed 🙂
  4. 2 points
    One option - if you can get to the command line, you could type something like this:
/usr/local/emhttp/webGui/scripts/notify -e "Your Unraid server is not secured" -s "I found your Unraid on the Internet without a password" -d "You need to secure this before someone hacks you" -i "alert"
That will give them a notification on the webgui and send them an email (if they have that configured)
  5. 1 point
    Sorry...should've explained! I was lucky enough to have recently migrated the VM in question to a second, non-Unraid box to use as a template for another project, so I was able to simply go grab a copy of the OVMF_VARS.fd file from there. Had that not been possible, I suppose I would've grabbed a clean copy of that file from here or here, the downside being the loss of my customized NVRAM settings. I didn't notice if any cores were pegged when this happened, but I rather doubt it, because in my case there was no boot activity--I didn't get to the Tianocore logo, nor even to the point of generating any (virtual) video output for noVNC to latch onto.
  6. 1 point
    Oh, and to fix your gpu issue: for a single gpu on x570 you need to disable the framebuffer in unraid. Add this parameter to your syslinux.cfg:
video=efifb:off
When you boot unraid, you will get no video output after the bootloader. A gtx1070 should work fine with a good vbios. This was my previous setup before upgrading to a 5700XT + second gpu.
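For context, here is a sketch of what the boot stanza in syslinux.cfg might look like with that parameter added. The label, kernel, and initrd lines below are assumptions based on a stock Unraid install; edit your own file rather than copying this verbatim:

```
label Unraid OS
  menu default
  kernel /bzimage
  append video=efifb:off initrd=/bzroot
```

The only change from a stock file is adding video=efifb:off to the existing append line.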
  7. 1 point
    Yep. Makes the job of recreating the docker image a few seconds of your time, a few minutes of downloading on a slow connection.
  8. 1 point
    In that case you should be fine, but still worth checking, some molex powered backplanes still have 3.3v on SATA.
  9. 1 point
    Yep, the ExileMod Server URL changed. In the Docker template, please click on Advanced and change the server URL to: http://www.exilemod.com/ExileServer-1.0.4a.zip But I would also recommend that you delete the exilemod folder in your appdata and the container completely, and redownload it from the CA App and change the URL. At the moment I'm doing a little code cleanup of all my containers, and ExileMod is one of the next containers that will get the update; after that it will stop if it can't download a file (right now it doesn't stop, it runs and loops over and over like in your case). I also changed the template file, but it will be updated in a few hours.
  10. 1 point
    In version 6.8.1 a new setting is introduced, "Host access to custom networks", which allows Unraid to communicate with docker containers on the same (macvlan) network. Unfortunately 6.8.1 is missing an update, which means the new setting does not function yet; this will be corrected in 6.8.2.
  11. 1 point
    Thank you so much @johnnie.black and @trurl for all your help! I am back up and running with all my data fully parity synced, and now I'm just working on getting it all backed up through a VM back into the cloud! While initially this all made me feel like my array was fragile, I now know that it is even more robust than I realized, and I have learned so much going forward! Thanks again for stepping in and making that the case!
  12. 1 point
    I don't have active cooling on it; next time I pull it out I'll see what I can do about adding some cooling. I have airflow as it's a SuperMicro chassis, but it's not ducted over there. Watching things further, I think the main issue could be the Mover process destroying performance when it runs. My cache drive is an M.2 PCIe 4.0 drive, but when Mover fires the system becomes nearly unresponsive. Just frustrated I suppose, as I bump the space limits on the 1TB cache drive moving videos around of late and performance tanks hard. I'm on 6.8 RC7, which has been stable, but it looks like I'm two revs behind so perhaps there's help to be had there. I don't think I want the 6.8 release though, as I think that was a kernel step backwards! Parity check speeds seem low as well at 113MB/s - 100+TB of space with a 10TB parity drive. Takes a day to check, but other than being "slow" it doesn't impact things too badly.
  13. 1 point
    Schedules Direct is very nice. I have found zap2xml just as easy, and free!
  14. 1 point
    What size are those 30 drives? If some of the drive sizes are smaller, it might be worth considering selling off some of the smaller ones and getting one or two larger ones.
  15. 1 point
    Good to see Squeezelite still going strong. Used to have it installed at work and stream music from home before the likes of Spotify and Plex came along. And IT clamped down on open ports.
  16. 1 point
    Tools -> Diagnostics -> attach zip file.
  17. 1 point
    First and foremost, do NOT disable Global C-State Control. With the latest BIOS, Global C-State Control is no longer required to be disabled. It, in fact, improves performance when enabled on my 2990WX. I actually don't remember ever needing to disable it for stability - I believe it's a Ryzen problem and not a TR4 thing.
Next, have you checked your CPU frequency while running? Is it thermal throttling? Are you using water cooling? I have seen several recent posts in various places about people complaining about poor performance on TR4, which turned out to be a gunked-up water cooling pump (especially the AIO kind).
Lastly, docker constantly and inexplicably loading 100% on a single core is a symptom of pinning an isolated core to the docker. Isolation = the core can ONLY be used by a VM. Putting ANY isolated core on a docker will eventually cause that core to be loaded to 100% as the docker gets into a loop of trying to use a forbidden core.
  18. 1 point
    Thank you @Skitals for this awesome plugin. Also thank you @Raz for your themes. SolarizedDark looks beautiful and will be the theme of my server for a long time.
  19. 1 point
    Upgrade to v6.8.1 and the errors filling the log will go away, it's a known issue with some earlier releases and VMs/dockers with custom IPs.
  20. 1 point
    Also see this link: https://forums.unraid.net/topic/65785-640-error-trying-to-provision-certificate-“your-router-or-dns-provider-has-dns-rebinding-protection-enabled”/?tab=comments#comment-630080
  21. 1 point
    The vbios has to be specific to your exact specimen (brand + model + revision), so it's very likely that you used something that doesn't match your card. It is not uncommon for some models to not have a vbios on TPU. I have even seen a vbios dumped from the 2nd slot not working when the card is in the 1st slot, but that seems rare (I have only seen 1 report). So given you don't have a 2nd GPU, the only thing you can do is to try to get the right vbios from TPU (if it's available). I believe SIO has a guide on how to dump it with GPU-Z (which runs from Windows), so if you can somehow get a Windows installation up and running, you may be able to follow that guide to do it. Alternatively, you can also dump the vbios if you have another computer. Passing through the GTX 1080 as the only GPU without a vbios is unlikely to work due to error code 43. Even with the right vbios, you might still need some workarounds in the xml, but we'll deal with that when we get there.
  22. 1 point
    Interesting. I use three Squeezebox players at home (Radio, Receiver and SB3), but have had various alternatives over the years. (One of them was a touch-screen O2 panel which I meant to sell on, but forgot.) Plus Softsqueeze machines. The radio still wakes me up every morning. I use Volumio on a standalone Pi as a media player at gigs that can be controlled from a browser. However, I haven't looked at it in a while (just set it up and left it). As mentioned earlier, the Receivers are the way to go for a "just works" solution. Once they're connected to your wi-fi, they're perfect. I also echo the fact that the Controller is awful.
  23. 1 point
    The groupings are without any ACS overrides.
  24. 1 point
    Thank you, I managed to work it out in the end. But thank you so much for replying.
  25. 1 point
    Sooooo.....I stopped using plex_rcs. I'm zhdenny on Github and I'm NOT by any means a programmer, nor do I have any talent in that arena. I merely did slight modifications to the original author's version of plex_rcs, just to keep it kicking along. That script is basically dead. Instead, I use plex_autoscan as @DZMM also suggested. I avoided using this at first because of all the dependencies, and some of the dockers for it looked intimidating. Anyway, I took the dive and was able to get a plex_autoscan docker container to work for me on Unraid. For those curious, there are basically two options:
1. A docker container which has Plex AND plex_autoscan all rolled into one docker. This is the easiest as it should be configured straight out of the box. The only issue is if you ALREADY have your own Plex docker set up and configured.....people do not typically want to migrate their Plex setup into another container....it can be done, but it's just more to do. https://hub.docker.com/r/horjulf/plex_autoscan
2. A standalone plex_autoscan container. This is what I ended up using. You'll have to very carefully read the plex_autoscan docker container readme AND the plex_autoscan readme. All the container mappings and the config.json file can get confusing, but when you finally figure it out, it just plain works great. Beware, you'll also need to grant the plex_autoscan docker access to /var/run/docker.sock, and you'll have to chmod 666 the docker.sock. This is typically a no-no but is necessary in order for plex_autoscan to communicate with the Plex docker container. https://hub.docker.com/r/sabrsorensen/alpine-plex_autoscan
I'm not gonna go into detail with this stuff....cuz frankly, everyone's Plex setups are different and I really REALLY don't want to write a guide or explain in detail how to do this stuff.
  26. 1 point
    Now we’re cooking! The temperature returneth! Great work. Thank you for seeing this through!
  27. 1 point
    I’m in! Cheers! Now to figure out how to use it! Thanks for your advice, much appreciated. Hope you have a pleasant weekend away from home. 👍🏻
  28. 1 point
    Meh, not to start a war of the OSes, but I don't think FreeBSD's major calling card is networking performance. Here's a study from 2019 showing FreeBSD holding its own, but not dominating Linux in any capacity: https://www.phoronix.com/scan.php?page=article&item=windows-linux-10gbe&num=4 And unless I'm mistaken, even Cisco UCS systems that support 40GbE+ connectivity don't support FreeBSD. The main benefit of FreeBSD over Linux is its licensing model. The permissive BSD license allows developers to advance the platform without having to contribute any improvements to the code base back to the FreeBSD maintainers (if they don't want to). This is why so many companies make hardware appliances that use FreeBSD for their base OS (they can fine-tune their product for maximum performance and then charge a premium for their efforts without allowing someone to just rip off the code that they spent time creating). That license is also compatible with the CDDL that Oracle's ZFS code is released under, which is why Oracle has never sued FreeNAS (and frankly I bet that was the deciding factor that led FreeNAS to use FreeBSD in the first place). Don't let any of this come across as me negating your reasons for needing FreeBSD though. A job's a job, and if you need it for your job, well, then you need it. That said, I can't imagine the FreeBSD developers can really afford to let this issue linger. It's not like Linux KVM is a small platform anymore, and I have to imagine the vast majority of FreeBSD users are using it in a VM (not bare metal). I could be wrong.
  29. 1 point
    If you are passing through a GPU to a VM then it cannot be shared with another running VM. You can only use the same GPU in two (or more) VMs if they will not be running at the same time. Hardware pass-through is always a bit of a hit-or-miss scenario as it is highly dependent on your exact hardware, BIOS, Linux kernel and KVM versions (most of which are outside Limetech's control). Until you actually try it you can never be sure if it will work for you.
  30. 1 point
    It's also worth noting that FreeBSD bug reports really haven't gotten any official response. The users that made the bug reports are the ones making the progress. Would anyone on your team have connections to get some official eyes on the bug?
  31. 1 point
    Overview: Support for Docker image unraid-api in the electricbrain repo. Docker Hub: https://hub.docker.com/r/electricbrainuk/unraidapi Github/ Docs: https://github.com/ElectricBrainUK/UnraidAPI Discord: https://discord.gg/Qa3Bjr9 If you feel like supporting our work: https://www.paypal.me/electricbrain Thanks a lot guys, I hope this is useful!
  32. 1 point
    Just make a full backup of your Unraid USB drive before the upgrade. I see in the screenshot that you are using Unraid server Basic, so there is a key? You can check this here: Tools -> Registration
  33. 1 point
    TLSv1 is being obsoleted this spring, and TLSv1 and TLSv1.1 should be removed from nginx.conf:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
Major browsers are committed to supporting TLSv1.2, so there should be minimal issues.
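With the two older protocols dropped from the line quoted above, the directive would read as follows (a sketch; match it against your own nginx.conf rather than copying blindly):

```
ssl_protocols TLSv1.2;
```

If the nginx/OpenSSL build in use supports it, TLSv1.3 could also be listed alongside TLSv1.2.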
  34. 1 point
    Not yet, we'll need to wait. Taking the opportunity to move this to stable reports since it's still an issue.
  35. 1 point
    This could be a cool add-on to the web ui. While Krusader is useful, it does have its kinks. A file manager with the ability to open text docs, pics, video, etc. would be cool. But even barring something that heavy, just the ability to move files, or at the very least rename them, would be great. Having to fire up Krusader just to rename files that I had to source elsewhere, just so sonarr/radarr can then rename them again so plex can see them, is a pain with how finicky Krusader can be.
  36. 1 point
    Did you try to start it as mentioned in the github repo?
docker exec -u nobody -t binhex-minecraftserver /usr/bin/minecraftd console
For me that works, but only one time. I'm somehow not able to detach from the screen session and have to close the window, which leaves me with no way to attach again. I have to restart the container to regain that ability. So I switched to not using the console command; instead I'm using the command command, which works way better for me.
docker exec -u nobody -t binhex-minecraftserver /usr/bin/minecraftd command <some minecraft command here without leading slash>
For all commands you can visit the Arch Wiki.
  37. 1 point
    Same happened to me... Do a diff on the files after you make changes and you can see that a lot more changes than what Space Invader One says in his video. I tried to add a graphics card and wow... if you're new to this stuff you definitely started in about the hardest spot. The diff between XMLs did help though, as you can see what is actually changing unexpectedly. Good luck.
  38. 1 point
    There is no 'free' version of Unraid anymore so you will not be able to upgrade unless you are prepared to buy a key. Are you prepared for that? You should always have backup of any critical data, preferably with at least one off-site copy. Your comments imply you have no backups which is very risky as you can potentially lose data for all sorts of reasons.
  39. 1 point
    Instructions For Pi-hole with WireGuard: For those of you who don't have a homelab exotic enough to have VLANs and who also don't have a spare NIC lying around, I have come up with a solution to make the Docker Pi-hole container continue to function if you are using WireGuard. Here are the steps I used to get a functional Pi-hole DNS on my unRAID server with WireGuard:
1a) Since we're going to change our Pi-hole to a host network, we'll first need to change your unRAID server's management ports so there isn't a conflict, via Settings > Management Access.
1) Take your Pi-hole container and edit it. Change the network type to "Host". This will allow us to avoid the problems inherent in trying to have two bridge networks talk to each other in Docker (thus removing our need to use a VLAN or set up a separate interface). You'll also want to make sure the ServerIP is set to your unRAID server's IP address, and make sure that DNSMASQ_LISTENING is set to single (we don't want Pi-hole to take over dnsmasq).
2) We'll need to do some minor container surgery. Unfortunately the Docker container lacks sufficient control to handle this through parameters. For this step, I will assume you have the following volume mapping; modify the following steps as needed.
3) Launch a terminal in unRAID and run the following command to cd into the above directory: cd /mnt/cache/appdata/pihole/dnsmasq.d/
4) We're going to create an additional dnsmasq config in this directory: nano 99-my-settings.conf
5) Inside this dnsmasq configuration, add the following: bind-interfaces Where the listen-address is the IP address of your unRAID server.
The reason this is necessary is that without it, we end up with a race condition depending on whether the Docker container or libvirt starts first. If the Docker container starts first (as happens when you set the container to autostart), libvirt seems to be unable to create a dnsmasq, which could cause problems for those of you with VMs.
If libvirt starts first, you run into a situation where you get the dreaded: "dnsmasq: failed to create listening socket for port 53: Address already in use". This is because without the above configuration, the dnsmasq created by Pi-hole attempts to listen on all addresses. By the way, this should also fix that error for those of you running Pi-hole normally (I've seen this error a few times in the forum and I can't help but wonder if this was the reason we went with the ipvlan setup in the first place). Now, just restart the container. I tested this and it should not cause any interference with the dnsmasq triggered by libvirt.
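Reading steps 4 and 5 together, the finished 99-my-settings.conf presumably looks something like the sketch below. The exact listen-address line is not shown in the post, so the address here is a placeholder assumption; substitute your own unRAID server's IP:

```
# 99-my-settings.conf - extra dnsmasq settings for the Pi-hole container
# Bind only to the address listed below instead of listening on all addresses
bind-interfaces
# Placeholder: replace with your unRAID server's IP address
listen-address=192.168.1.10
```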
  40. 1 point
    Tons of posts related to Windows 10 and SMB as the root cause of the inability to connect to unRaid were fruitless, so I'm recording this easy fix for my future self. If you cannot access your unRaid shares via DNS name ( \\tower ) and/or via ip address ( \\192.168.x.y ) then try this. These steps do NOT require you to enable SMB 1.0, which is insecure.
Directions:
1. Press the Windows key + R shortcut to open the Run command window.
2. Type in gpedit.msc and press OK.
3. Select Computer Configuration -> Administrative Templates -> Network -> Lanman Workstation, double click Enable insecure guest logons and set it to Enabled.
4. Now attempt to access \\tower
Related Errors:
Windows cannot access \\tower
Windows cannot access \\
You can't access this shared folder because your organization's security policies block unauthenticated guest access. These policies help protect your PC from unsafe or malicious devices on the network.
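On Windows 10 Home editions, gpedit.msc isn't available, but the same policy can be set via the registry value it maps to. A hedged sketch, run from an elevated Command Prompt (AllowInsecureGuestAuth is the registry value behind the "Enable insecure guest logons" policy; verify against Microsoft's SMB guest-access documentation before applying):

```
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v AllowInsecureGuestAuth /t REG_DWORD /d 1 /f
```

As with the Group Policy route, this deliberately weakens guest-access protection, so only use it on a trusted network.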
  41. 1 point
    I'm very much interested in hearing how things go.
  42. 1 point
    People, we have a genius here, seriously. It works! Moving the built-in audio to bus 0x00 and slot 0x02 finally makes AppleALC work. So, to sum up:
- AppleALC+Lilu kexts in the CLOVER kexts Other folder
- built-in audio at bus 0x00 and slot 0x0y (where y is a number different from 0)
- In clover config.plist: Devices --> Audio: Inject=No
- In clover config.plist: Devices --> Audio: check ResetHDA
- In clover config.plist: Devices --> Properties --> Devices: fill in with the proper address (use gfxutil with the command gfxutil-1.79b-RELEASE/gfxutil -f HDEF)
- In clover config.plist: Devices --> Properties --> Add property key: layout-id, Property value: x (where x is a number reflecting the audio layout, see here for layouts for supported codecs: https://github.com/acidanthera/applealc/wiki/supported-codecs ) (my working layout is 7), Value type: NUMBER
- (Optional) In clover config.plist: Boot --> add boot arguments -liludbg and -alcdbg: this will work if you use the DEBUG kexts; when your system boots you can give this command in terminal to check for Lilu/AppleALC: log show --predicate 'process == "kernel" AND (eventMessage CONTAINS "AppleALC" OR eventMessage CONTAINS "Lilu")' --style syslog --source
Unfortunately I cannot check the HDMI audio of my GPU as I don't have an HDMI cable, but from the attached log it seems ok (HDAU).
Relevant parts of the xml for the vm. Passed-through GPU and audio of the GPU:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x83' slot='0x00' function='0x0'/>
  </source>
  <alias name='hostdev0'/>
  <rom file='/mnt/user/GTXTitanBlack.dump'/>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x83' slot='0x00' function='0x1'/>
  </source>
  <alias name='hostdev1'/>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
</hostdev>
As pointed out by Leoyzen, the gpu and gpu audio must be on the same bus (in this case 0x03) and different functions (0x0 for gpu, 0x1 for gpu audio). Built-in audio (passed through):
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
  </source>
  <alias name='hostdev2'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</hostdev>
Built-in audio is on bus 0x00. Siri is working ok. Clover settings for built-in audio. I'm saving VoodooHDA as a second option for the audio and using AppleALC. @Leoyzen May I ask, what was the hint that made me change the bus of the built-in audio? lilu-AppleALC-log.txt
  43. 1 point
    Port 80 is already used by the Unraid GUI. Map the first entry for the Container Port 80 to something that isn't used, like 8080, 8002, etc
  44. 1 point
    First, rename bzimage and bzmodules in /boot to back up the stock kernel.
Then unzip the archive to /boot/
Finally, reboot your machine.
I tried to add an entry in the start menu, but I don't know how to specify the bzmodules to load.
  45. 1 point
    I can't get my cert to work with Let's Encrypt. Whenever I request an SSL certificate from Let's Encrypt, I get an internal error from the IP address and port of the Nginx GUI. When I refresh the SSL certificates tab, it shows the certificates I requested, but the expiration date and time are the exact date and time I requested them. Not sure what could cause this.
  46. 1 point
  47. 1 point
    So after a variety of attempts from a number of forum posts, the following approach (link1, link2) worked for me:
>> lsscsi
[2:0:0:0] cd/dvd HL-DT-ST BD-RE WH14NS40 1.03 /dev/sr0
And then hand-modifying the ubuntu VM XML with:
<controller type='scsi' index='0' model='virtio-scsi'/>
<hostdev mode='subsystem' type='scsi'>
  <source>
    <adapter name='scsi_host2'/>
    <address type='scsi' bus='0' target='0' unit='0'/>
  </source>
  <readonly/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</hostdev>
Other posts had suggested changing the controller value to "1", which did not work for me. I now have access to the Blu-ray drive from within the Ubuntu VM (it automatically detects an audio disc insert and mounts it). I am now able to rip audio CDs, which was my original objective.
  48. 1 point
    I'm a complete idiot. This is because I put a number but not a G for gigabytes in the disk size.
  49. 1 point
    Strictly speaking step 5 is normally not necessary as KVM can handle .vmdk files directly. To do so you need to enter the path to the .vmdk file directly into the template as the unRAID GUI does not offer such files automatically.
  50. 1 point
    I figured it out. I needed to specify the byte offset of where the partition begins. For anyone who might have the same question in the future, here is what I did. From the unRAID command console, display partition information of the vdisk:
fdisk -l /mnt/disks/Windows/vdisk1.img
I was after the Start sector values. The output will look something like this:
Disk vdisk1.img: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xda00352d
Device Boot Start End Sectors Size Id Type
vdisk1.img1 * 2048 206847 204800 100M 7 HPFS/NTFS/exFAT
vdisk1.img2 206848 41940991 41734144 19.9G 7 HPFS/NTFS/exFAT
To find the offset in bytes, multiply the sector start value by the sector size. In this case, I wanted to mount vdisk1.img2: 206848 * 512 = 105906176
Final command to mount the vdisk NTFS partition as read-only:
mount -r -t ntfs -o loop,offset=105906176 /mnt/disks/Windows/vdisk1.img /mnt/test
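The offset arithmetic above can also be done right in the shell, which avoids calculator typos. A small sketch, using the start sector and sector size from the fdisk output in this post:

```shell
# Byte offset = partition start sector * sector size
start_sector=206848   # "Start" column for vdisk1.img2
sector_size=512       # from "Sector size (logical/physical)"
offset=$((start_sector * sector_size))
echo "$offset"        # prints 105906176
```

The printed value is what goes into the offset= option of the mount command.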