Leaderboard

Popular Content

Showing content with the highest reputation on 01/14/20 in all areas

  1. I goofed up and updated to 6.8.1 without first looking to see if this was updated as well. I'll be more cautious in the future, for sure!
    3 points
  2. It's as simple as browsing to the Flash page via Main and clicking the Flash Backup button. This downloads a zip file containing everything on your USB flash boot device. You can then feed this file into the USB Creator tool.
    2 points
  3. The easier way is click on the flash drive on the Main tab and select the Backup option.
    2 points
  4. Remove the Unraid USB drive, connect it to a Windows PC, and back up the drive contents to the PC. Then do a fresh install of the Unraid version you want on the USB drive and copy the config folder from your PC backup to the Unraid USB drive you just made. Plug the USB back into your Unraid box and boot.
    2 points
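The same backup/restore can be sketched from any Linux box as well; a minimal sketch, assuming (hypothetically) the flash drive shows up at /mnt/usb - both paths here are examples, not fixed locations:

```shell
# Example paths only: adjust SRC to wherever the flash drive actually mounts.
SRC="/mnt/usb"                 # old Unraid flash drive
BACKUP="$HOME/unraid-backup"   # local backup location

mkdir -p "$BACKUP"
# Save the config folder (licence key, shares, docker templates, etc.)
if [ -d "$SRC/config" ]; then
    cp -r "$SRC/config" "$BACKUP/"
fi
# After making the fresh Unraid USB with the USB Creator tool, copy it back:
# cp -r "$BACKUP/config" /mnt/usb/
```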
  5. Application Name: DuckDNS Application Site: https://www.duckdns.org/ Docker Hub: https://hub.docker.com/r/linuxserver/duckdns/ Github: https://github.com/linuxserver/docker-duckdns/ Please post any questions or issues relating to this docker in this thread. If you are not using Unraid (and you should be!) then please do not post here; instead head to linuxserver.io to see how to get support.
    1 point
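On Unraid the Community Applications template fills this in for you, but as a rough sketch of the equivalent docker run for this image - SUBDOMAINS and TOKEN are placeholders you get from duckdns.org, and PUID 99 / PGID 100 are the usual Unraid values:

```shell
# Build the command as a string so you can review it before running with eval.
DUCKDNS_CMD="docker run -d --name=duckdns \
  -e PUID=99 -e PGID=100 -e TZ=Europe/London \
  -e SUBDOMAINS=myhost -e TOKEN=my-duckdns-token \
  --restart unless-stopped linuxserver/duckdns"
echo "$DUCKDNS_CMD"
# run it with: eval "$DUCKDNS_CMD"
```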
  6. Per the title. Plan to use this as a workstation / high-end gaming rig, running both Windows 10 and macOS Catalina in Unraid VMs (as well as Win 10 natively). I'll add a new thread in the full build section once I get the machine up and running. Couple of days playing around so far:
     - Latest motherboard BIOS: 2.50 (2019/11/13), running in UEFI
     - Stock clocks, voltages, etc. for the 3950X and RAM
     - 32 GiB RAM: G.Skill Ripjaws V 2 x 16 GB DDR4-3200 PC4-25600 CL16 dual-channel kit (F4-3200C16D-32G)
     - Hoping/planning to go to 64 GiB RAM and (maybe) 2x Vega VIIs
     Initial testing:
     VM #1: macOS Catalina, using SpaceInvader's docker/template plus some tweaks
     - 8 cores / 16 threads, 16 GiB RAM
     - Vega VII passed through in PCI-E slot #1 (PCI-E 3 x16); no hardware acceleration / Metal / OpenCL yet
     - Cinebench R20: 4850 (multicore)
     Native Win 10:
     - 1 TB PCI-E 4 x4 NVMe drive
     - 16 cores / 32 threads, 32 GiB RAM
     - Vega VII (stock everything)
     - Cinebench R20: 9454 (multicore)
    1 point
  7. This is a bug fix and security update release. Due to a security vulnerability discovered in forms-based authentication: ALL USERS ARE STRONGLY ENCOURAGED TO UPGRADE.
     To upgrade:
     - If you are running any 6.4 or later release, click 'Check for Updates' on the Tools/Update OS page.
     - If you are running a pre-6.4 release, click 'Check for Updates' on the Plugins page.
     - If the above doesn't work, navigate to Plugins/Install Plugin, select/copy/paste this plugin URL and click Install: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg
     Refer also to @ljm42's excellent 6.4 Update Notes, which are especially helpful if you are upgrading from a pre-6.4 release.
     Bugs: If you discover a bug or other issue in this release, please open a Stable Releases Bug Report.
     Version 6.8.1 2020-01-10 (changes vs. 6.8.0)
     Base distro:
     - libuv: version 1.34.0
     - libvirt: version 5.10.0
     - mozilla-firefox: version 72.0.1 (CVE-2019-17026, CVE-2019-17015, CVE-2019-17016, CVE-2019-17017, CVE-2019-17018, CVE-2019-17019, CVE-2019-17020, CVE-2019-17021, CVE-2019-17022, CVE-2019-17023, CVE-2019-17024, CVE-2019-17025)
     - php: version 7.3.13 (CVE-2019-11044, CVE-2019-11045, CVE-2019-11046, CVE-2019-11047, CVE-2019-11049, CVE-2019-11050)
     - qemu: version 4.2.0
     - samba: version 4.11.4
     - ttyd: version 20200102
     - wireguard-tools: version 1.0.20200102
     Linux kernel:
     - version 4.19.94
     - kernel_firmware: version 20191218_c4586ff (with additional Intel BT firmware)
     - CONFIG_THUNDERBOLT: Thunderbolt support
     - CONFIG_INTEL_WMI_THUNDERBOLT: Intel WMI thunderbolt force power driver
     - CONFIG_THUNDERBOLT_NET: Networking over Thunderbolt cable
     - oot: Highpoint rr3740a: version v1.19.0_19_04_04
     - oot: Highpoint r750: version v1.2.11-18_06_26 [restored]
     - oot: wireguard: version 0.0.20200105
     Management:
     - add cache-busting params for noVNC url assets
     - emhttpd: fix cryptsetup passphrase input
     - network: disable IPv6 for an interface when its setting is "IPv4 only"
     - webgui: Management page: fixed typos in help text
     - webgui: VM settings: fixed Apply button sometimes not working
     - webgui: Dashboard: display CPU load full width when no HT
     - webgui: Docker: show 'up-to-date' when status is unknown
     - webgui: Fixed: handle race condition when updating share access rights in Edit User
     - webgui: Docker: allow to set container port for custom bridge networks
     - webgui: Better support for custom themes (not perfect yet)
     - webgui: Dashboard: adjusted table positioning
     - webgui: Add user name and user description verification
     - webgui: Edit User: fix share access assignments
     - webgui: Management page: remove UPnP conditional setting
     - webgui: Escape shell arg when logging csrf mismatch
     - webgui: Terminal button: give unsupported warning when Edge/MSIE is used
     - webgui: Patched vulnerability in auth_request
     - webgui: Docker: added new setting "Host access to custom networks"
     - webgui: Patched vulnerability in template.php
    1 point
  8. Fair enough, but IMO if you're booting into Unraid's GUI mode, you should only be using the built-in browser for accessing the GUI itself, not for random web surfing. It's there only for that reason.
    1 point
  9. 1 point
  10. If one wanted to donate to show appreciation for the work the devs have done to make the Nvidia patch possible, where could one do that? I just started using the Nvidia patch of Unraid with the 6.8.0 build and I can't tell you enough how awesome it is. I'd like to show my appreciation to the developers.
    1 point
  11. One cable only; the other IOM3 is for multipath/redundancy. Only SAS drives with dual ports support that, and even with those you can only use one cable, since Unraid doesn't support multipath.
    1 point
  12. 1 point
  13. Just rename config/network.cfg on the flash drive so the server will use default network settings, which include DHCP. I always use DHCP instead of static addresses and then reserve IPs by MAC in the router. That way there is only one place to manage the IPs for everything on my network.
    1 point
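On a running server the rename amounts to the following - assuming the flash is mounted at /boot, as it is on Unraid, and keeping a .bak copy rather than deleting:

```shell
CFG="/boot/config/network.cfg"
if [ -f "$CFG" ]; then
    mv "$CFG" "$CFG.bak"   # keep a copy instead of deleting outright
    echo "renamed; reboot for default (DHCP) networking"
else
    echo "no $CFG found - already using defaults?"
fi
```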
  14. That's going to be a fun one. I don't have 6.8.1 yet because I am using the nvidia plugin, but I'll see if I can figure out what is going on anyway and get back to you.
    1 point
  15. SFF-8088 to whatever NetApp uses; I believe it's QSFP.
    1 point
  16. Anyone know WHEN 6.8.1 is going to be out? I won't update till it's out! Why is it so long after? 24 hours is acceptable, not dayzzzzzzzz
    1 point
  17. Not exactly. What you're doing is redundant. Once the NVIDIA build is released, you just need to follow step 2. Step 1 is only necessary if you want to go to vanilla 6.8.1 while you wait for the NVIDIA build to be ready.
    1 point
  18. Glad you got it working, but looking again at my post I'm not sure if these mappings will work: /media/downloads <-> /mnt/user/mount_unionfs/google_vfs/downloads/ and /media <-> /mnt/user/mount_unionfs/google_vfs/. I'm no expert; what I do to make sure I don't mess up when moving stuff around in dockers is just use these mappings for all my dockers: /user <-> /mnt/user/ and /disks <-> /mnt/disks/ (RW Slave). That way all dockers are consistent and I don't have to remember mappings.
    1 point
  19. You mentioned that you wanted less than 30. Didn't seem too many to do manually if absolutely necessary.
    1 point
  20. Main>flash>flash backup. There will probably be a short delay while the zip is created.
    1 point
  21. I did the same lol, patiently awaiting the Nvidia 6.8.1 build to be released
    1 point
  22. It's not bad once you're used to it. Really, you can set it up once and forget it. Sent from my Pixel 2 XL using Tapatalk
    1 point
  23. It's because of the "2nd Law of Entitled Asses Dynamics" which states: The total demand from entitled asses for an isolated software can never decrease over time. So in layman's terms, one cannot decrease the demand of entitled asses. At "best", it can only be shifted from LSIO to LT.
    1 point
  24. So many errors suggest parity wasn't valid at some point during what you did. As long as future parity checks return 0 errors, you should be fine.
    1 point
  25. Funny enough, I am the hugest ME fan, and I never could get myself to play the Citadel DLC. Cause then it would truly be over. That was like 5 years ago. I am keeping a 4th full playthrough in my back pocket for nostalgia's sake and will run through EVERYTHING one last time. Gotta finish The Outer Worlds first (which is fantastic BTW, and I really don't like Fallout games per se). I played Andromeda for like 40 hours and my save got corrupted and I just could not get myself to start over. It was pretty bad. Those Faces Tho...
    1 point
  26. 1 point
  27. 1 point
  28. If you really need to access your server remotely, you should be using a VPN, either OpenVPN or WireGuard.
    1 point
  29. Thank you @Skitals for this awesome plugin. Also thank you @Raz for your themes. SolarizedDark looks beautiful and will be the theme of my server for a long time.
    1 point
  30. Why would unraid be forced to do that? They don't currently put out a new release when there is a new docker update, or an update to the Intel gpu drivers, or kernels, etc. They put out a release when they feel like it or when there is a major security fix. We on the other hand feel like we are forced to put out an update for this addon whenever there is a new unraid release. It is plenty of extra work for us.
    1 point
  31. Quick guide on what I did. Works and looks brilliant. I'm running alturismo/xteve in Host mode.
     xTeVe:
     1. Install xTeVe in Unraid and launch it.
     2. Click on Playlist and add your m3u file.
     3. Then Filter - this allows you to filter in channels based on their grouping in the m3u file. I have a number for each sport group I have.
     4. XMLTV - not needed for Emby.
     5. Mapping - this is where the effort is invested: adding the Sky channel numbers that correspond to the Sky channels. Matching them correctly will allow them to be pulled into Emby with the right channel logo, EPG, etc. The XMLTV file should be xTeVe Dummy.
     In Emby:
     1. Add the m3u created by xTeVe - it will probably be http://SERVERIPADDRESS:34400/m3u/xteve.m3u - as a TV source under Live TV.
     2. TV Guide Data Provider - I use Sky SD - Cable.
     3. Refresh guide data.
     4. Go into Emby and Live TV. Channels should be listed with full EPG.
     If you have multiples of the same channel, give them their own channel number in xTeVe and manually map the channel in Emby. Happy to help further if I can. My m3u is a mess but was worth the investment in time. I can also now record m3u via Emby, and you can specify a custom record path in Emby itself.
    1 point
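A quick way to sanity-check the playlist before pointing Emby at it - SERVERIPADDRESS is a placeholder as in the post, and 34400 is xTeVe's default port:

```shell
XTEVE_URL="http://SERVERIPADDRESS:34400/m3u/xteve.m3u"
echo "$XTEVE_URL"
# Fetch the first few lines; a valid playlist starts with #EXTM3U:
# curl -s "$XTEVE_URL" | head -n 5
```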
  32. @DZMM Hey, I haven't been on here for a bit to see the changes you've made. Just looked over them and wanted to say that all the revisions line up with the fixes I had made to mine. So everything should work smoothly. ____________________________________________ If anybody is on the fence, migration should be error/headache free now I've been running mine 24/7 for 2+ weeks now without a single issue. Much cleaner script and even though I wasn't getting bottlenecks or utilizing hardlinks... More optimized is always a plus in my books (+ a minor bump in pull/push speed is always appreciated). ____________________________________________ And as always -- Thank you so much @DZMM for the work you have done.
    1 point
  33. I'm guessing here, but I think it would be better to have it run via User Scripts as a First Array Start Only script. I recently tried to add a command using IPMITool to the go file but found that it would only work as a User Script. I think the reason is that IPMITool wasn't initialized yet and the array had to be started first. Since you are using the Nvidia Unraid plugin, you may have the same issue I had with plugins not being fully initialized yet (if I'm correct about that, anyway).
    1 point
  34. Also see this link: https://forums.unraid.net/topic/65785-640-error-trying-to-provision-certificate-“your-router-or-dns-provider-has-dns-rebinding-protection-enabled”/?tab=comments#comment-630080
    1 point
  35. The groupings are without any ACS overrides.
    1 point
  36. Sooooo... I stopped using plex_rcs. I'm zhdenny on GitHub and I'm NOT by any means a programmer or have any talent in that arena. I merely made slight modifications to the original author's version of plex_rcs, just to keep it kicking along. That script is basically dead. Instead, I use plex_autoscan, as @DZMM also suggested. I avoided using it at first because of all the dependencies, and some of the dockers for it looked intimidating. Anyway, I took the dive and was able to get a plex_autoscan docker container to work for me on Unraid. For those curious, there are basically two options:
     1. A docker container which has Plex AND plex_autoscan all rolled into one. This is the easiest as it should be configured straight out of the box. The only issue is if you ALREADY have your own Plex docker set up and configured - people do not typically want to migrate their Plex setup into another container. It can be done, but it's just more to do. https://hub.docker.com/r/horjulf/plex_autoscan
     2. A standalone plex_autoscan container. This is what I ended up using. You'll have to very carefully read the plex_autoscan docker container readme AND the plex_autoscan readme. All the container mappings and the config.json file can get confusing, but when you finally figure it out, it just plain works great. Beware: you'll also need to grant the plex_autoscan docker access to /var/run/docker.sock, and you'll have to chmod 666 the docker.sock. This is typically a no-no but is necessary for plex_autoscan to communicate with the Plex docker container. https://hub.docker.com/r/sabrsorensen/alpine-plex_autoscan
     I'm not gonna go into detail with this stuff, cuz frankly everyone's Plex setup is different and I really REALLY don't want to write a guide or explain in detail how to do this stuff.
    1 point
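The docker.sock part of option 2 boils down to a path mapping plus the (admittedly risky) permission change the post mentions; a sketch, with the template mapping shown as a comment:

```shell
# In the container template, map the socket straight through:
#   host: /var/run/docker.sock  ->  container: /var/run/docker.sock
SOCK="/var/run/docker.sock"
if [ -S "$SOCK" ]; then
    # security trade-off: any local user can now talk to the Docker daemon
    chmod 666 "$SOCK" 2>/dev/null || echo "chmod needs root"
fi
```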
  37. If you think this is a simple compile-on-boot job, you have clearly not read much in this thread or understood how much work - manual work, that is - it takes to get this to compile. If it were a simple script job for each Unraid release, it would have been released shortly after. But it is not. If you want to test Unraid releases the second they are ready, run a clean Unraid build.
    1 point
  38. I'm going to go out on a limb here and speak for @CHBMB. This plugin (and the DVB plugin) are built and maintained by him on a volunteer basis and it isn't always convenient for him (or anyone else) to continually keep up with every release as soon as it's released. There is significant time required to build and test each and every release which isn't always doable because of other commitments which are probably far more important. Cut him some slack. To be honest, at times I'm surprised due to the time required for this that he even bothers to support RC's.
    1 point
  39. I replicated my shares from the flexraid setup and just moved everything over one drive at a time. The only thing that is duplicated is the folder structure, so it's not a big deal to keep overwriting it - it just doesn't do anything. I used the Krusader docker to do the moving. You could even create the folder structure ahead of time using Linux commands without moving anything (which is what I did for testing, to see what it would actually look like without moving my data - sorry, I don't remember the command I used, but a quick Google will find it). Keep in mind that you would create the root folders and then copy the folder structure of each root folder. An example: I had a root folder "pictures", so I created a share called pictures and then used Krusader to copy/move pictures from drive 1 to the Unraid share. It will warn you that it's overwriting pictures; you say yes and it continues. For disk 2 you do the same, but this time you will get a second warning since disk 1 already created the folder structure; you again say overwrite, and so on. The data that is already there will remain and the new data will also be written.
     You can test this if you wish with a small subset of drives using the onboard controller of your motherboard. I understand your concerns, and the only way to be sure and put your mind at ease is to test it, which is what I would suggest for anyone thinking of going down this road. I used some small disks and just played until I was sure/happy that it was the right way to do it. I then expanded my test box to my production box, which also allowed me to test drive replacement and disk recovery, and that worked very well. Unraid seems to handle hardware changes very well, though there are outlying cases (from what I've read) that don't always work - as with anything, nothing is perfect. Hope this helps :).
    1 point
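The structure-only copy the post alludes to can be done several ways; the author's exact command is unknown, so this is just one portable sketch using find + mkdir:

```shell
# copy_tree SRC DST: replicate every directory under SRC into DST, no files.
copy_tree() {
    src=$1
    dst=$2
    (cd "$src" && find . -type d) | while IFS= read -r d; do
        mkdir -p "$dst/$d"
    done
}

# example: copy_tree /mnt/disk1/pictures /mnt/user/pictures
```

rsync can do the same with filter rules, but the find/mkdir version works with nothing beyond a POSIX shell.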
  40. Just saw it again for the first time since I set the -i br0 WSD option in SMB settings. It's been about four weeks.
    1 point
  41. This could be a cool add-on to the web UI. While Krusader is useful, it does have its kinks. A file manager with the ability to open text docs, pics, video, etc. would be cool. But even barring something that heavy, just the ability to move files, or at the very least rename them, would be great. Having to fire up Krusader just to rename files that I had to source elsewhere, just so Sonarr/Radarr can rename them again so Plex can see them, is a pain with how finicky Krusader can be.
    1 point
  42. Did you try to start it as mentioned in the GitHub repo?
     docker exec -u nobody -t binhex-minecraftserver /usr/bin/minecraftd console
     For me that works, but only one time. I'm somehow not able to detach from the screen session and have to close the window, which leaves me with no way to attach again. I have to restart the container to regain that ability. So I switched to not using the console command; instead I'm using the command command, which works way better for me:
     docker exec -u nobody -t binhex-minecraftserver /usr/bin/minecraftd command <some minecraft command here without leading slash>
     For all commands you can visit the Arch Wiki.
    1 point
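To save retyping the long exec line, a tiny wrapper function can help; the container name is taken from the post, and `mc` is just a made-up name for this sketch:

```shell
# mc "say hello" -> runs the given command inside the container via minecraftd
mc() {
    docker exec -u nobody -t binhex-minecraftserver \
        /usr/bin/minecraftd command "$@"
}

# usage: mc "save-all"   or   mc "say server restarting in 5 minutes"
```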
  43. Here is the original post I made about this in case you missed it: https://forums.unraid.net/topic/61985-file-browser-in-webui/page/2/?tab=comments#comment-723474 What I am suggesting there for "surprise" #1 is not a warning, but instead that any operation that would cause the "User Share Copy Bug" not be allowed at all. The problem is at the Linux level of things, but at a higher level it isn't that hard to detect when a move or copy would cause this problem. And for "surprise" #2, the suggested workaround is explained there in that post. So, if a file explorer were implemented, it would need to do some things other than just passing it to Linux to carry out the move or copy.
    1 point
  44. Instructions for Pi-hole with WireGuard. For those of you who don't have a homelab exotic enough to have VLANs, and who also don't have a spare NIC lying around, I have come up with a solution to keep the Docker Pi-hole container functioning while using WireGuard. Here are the steps I used to get a functional Pi-hole DNS on my unRAID box with WireGuard:
     1. Since we're going to change our Pi-hole to a host network, first change your unRAID server's management ports under Settings > Management Access so there isn't a conflict.
     2. Take your Pi-hole container and edit it. Change the network type to "Host". This lets us avoid the problems inherent in trying to have two bridge networks talk to each other in Docker (thus removing the need for a VLAN or a separate interface). You'll also want to make sure ServerIP is set to your unRAID server's IP address and that DNSMASQ_LISTENING is set to single (we don't want Pi-hole to take over dnsmasq).
     3. We'll need to do some minor container surgery; unfortunately the Docker container lacks sufficient control to handle this through parameters. For this step I will assume the appdata volume is mapped to /mnt/cache/appdata/pihole/dnsmasq.d/; modify the following steps as needed.
     4. Launch a terminal in unRAID and cd into that directory: cd /mnt/cache/appdata/pihole/dnsmasq.d/
     5. Create an additional dnsmasq config in this directory: nano 99-my-settings.conf
     6. Inside this dnsmasq configuration, add the following, where the listen-address is the IP address of your unRAID server:
        listen-address=<your unRAID server IP>
        bind-interfaces
     The reason this is necessary is that without it we end up with a race condition depending on whether the Docker container or libvirt starts first. If the Docker container starts first (as happens when you set the container to autostart), libvirt seems unable to create a dnsmasq instance, which could cause problems for those of you with VMs. If libvirt starts first, you run into the dreaded "dnsmasq: failed to create listening socket for port 53: Address already in use", because without the above configuration the dnsmasq created by Pi-hole tries to listen on all addresses. By the way, this should also fix that error for those of you running Pi-hole normally (I've seen it a few times in the forum, and I can't help but wonder if it was the reason we went with the ipvlan setup in the first place). Now just restart the container. I tested this and it should not cause any interference with the dnsmasq triggered by libvirt.
    1 point
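The config-file steps above can be collapsed into one heredoc. The appdata path matches the mapping assumed in the post, and 192.168.1.10 is an example server IP; the sketch falls back to a temp dir if the appdata path isn't present, so it's safe to dry-run:

```shell
CONF_DIR="/mnt/cache/appdata/pihole/dnsmasq.d"
mkdir -p "$CONF_DIR" 2>/dev/null || CONF_DIR=$(mktemp -d)

# 192.168.1.10 is a placeholder: use your unRAID server's IP address.
cat > "$CONF_DIR/99-my-settings.conf" <<'EOF'
listen-address=192.168.1.10
bind-interfaces
EOF
echo "wrote $CONF_DIR/99-my-settings.conf"
```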
  45. Snapshots (For VMs and shares) would be huge. Right now it absolutely stops me from being able to recommend this to even a small business. iSCSI would be huge for me in a home scenario. Right now I run a VM on top of unRAID just to do iSCSI, which is an unnecessary pain. A secondary array that uses ZFS is again something I have to use a VM for at the moment, which I would love to see baked into unRAID natively.
    1 point
  46. ****Edit: I found it. Turns out it was a setting in Deluge not placing the leading "/" on my path for downloaded files. Not sure why it used to work and then suddenly stopped, but that was the answer. I am suddenly getting the following error in both Sonarr and Radarr when they try to move downloaded files. I haven't made any changes to my configuration. I am guessing an update to Sonarr, Radarr, or Deluge made a change I didn't notice, and I can't figure out what setting I need to modify to get things working as before. "not a valid *nix path. paths must start with / Parameter name: path" Can anyone help me find the correct setting to change so files are moved to the correct directories?
    1 point
  47. So after a variety of attempts from a number of forum posts, the following approach (link1, link2) worked for me:
     >> lsscsi
     [2:0:0:0]    cd/dvd    HL-DT-ST BD-RE WH14NS40    1.03    /dev/sr0
     And then hand-modifying the Ubuntu VM's XML with:
     <controller type='scsi' index='0' model='virtio-scsi'/>
     <hostdev mode='subsystem' type='scsi'>
       <source>
         <adapter name='scsi_host2'/>
         <address type='scsi' bus='0' target='0' unit='0'/>
       </source>
       <readonly/>
       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
     </hostdev>
     Other posts had suggested changing the controller value to "1", which did not work for me. I now have access to the Blu-ray drive from within the Ubuntu VM (it automatically detects an audio disc insert and mounts it). I am now able to rip audio CDs, which was my original objective.
    1 point
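The lsscsi address maps directly onto that XML: the first field of [2:0:0:0] gives the adapter name, the remaining fields the bus/target/unit. A small sketch of the mapping, using the address from the post (the VM name in the virsh comment is a placeholder):

```shell
ADDR="2:0:0:0"               # H:C:T:L as printed by lsscsi
HOST="scsi_host${ADDR%%:*}"  # host 2 -> adapter name='scsi_host2' in the XML
echo "$HOST"
# then: virsh edit <vm-name>  and paste the <hostdev> block under <devices>
```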
  48. Also, a screen session can cause this - easy to forget they might still be hanging around.
    1 point