
Leaderboard


Popular Content

Showing content with the highest reputation since 11/06/19 in Posts

  1. 7 points
I've been doing this for a long time now via the command line with my important VMs. First, my VM vdisks are in the domains share, where I have created the individual VM directory as a btrfs subvolume instead of a normal directory, i.e.:

btrfs subv create /mnt/cache/domains/my-vm

results in:

/mnt/cache/domains/my-vm <--- a btrfs subvolume

Then let the VM manager create vdisks in here normally and create your VM. Next, when I want to take a snapshot, I hibernate the VM (win10) or shut it down. Then from the host:

btrfs subv snapshot -r /mnt/cache/domains/my-vm /mnt/cache/domains/my-vm/backup

Of course you can name the snapshot anything, perhaps including a timestamp. In my case, after taking this initial backup snapshot, a subsequent backup will do something like this:

btrfs subv snapshot -r /mnt/cache/domains/my-vm /mnt/cache/domains/my-vm/backup-new

Then I send the block differences to a backup directory on /mnt/disk1:

btrfs send -p /mnt/cache/domains/my-vm/backup /mnt/cache/domains/my-vm/backup-new | pv | btrfs receive /mnt/disk1/Backup/domains/my-vm

and then delete backup and rename backup-new to backup. What we want to do is add an option in VM manager that says "Create snapshot upon shut-down or hibernation" and then add a nice GUI to handle snapshots and backups. I have found btrfs send/recv somewhat fragile, which is one reason we haven't tackled this yet. Maybe there's some interest in a blog post describing the process along with the script I use?
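As a rough illustration, the whole cycle could be scripted along these lines. This is a sketch assuming the same /mnt/cache/domains/my-vm subvolume and /mnt/disk1/Backup/domains/my-vm destination as above, not the script referenced in the post:

#!/bin/bash
# Sketch: incremental btrfs backup of a VM subvolume (shut the VM down first)
SRC=/mnt/cache/domains/my-vm
DST=/mnt/disk1/Backup/domains/my-vm

if [ -d "$SRC/backup" ]; then
    # Incremental run: snapshot, then send only the block differences
    btrfs subvolume snapshot -r "$SRC" "$SRC/backup-new"
    btrfs send -p "$SRC/backup" "$SRC/backup-new" | pv | btrfs receive "$DST"
    # Rotate on both ends so the next run diffs against "backup"
    btrfs subvolume delete "$SRC/backup"
    mv "$SRC/backup-new" "$SRC/backup"
    btrfs subvolume delete "$DST/backup"
    mv "$DST/backup-new" "$DST/backup"
else
    # First run: full snapshot and full send
    btrfs subvolume snapshot -r "$SRC" "$SRC/backup"
    btrfs send "$SRC/backup" | pv | btrfs receive "$DST"
fi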
  2. 3 points
I understand you don't want to compile yourself, but I don't particularly want to compile any more than the existing four/five builds I do with every release. If these kernel modifications are needed then they should be pushed upstream to LimeTech.
  3. 2 points
I strongly disagree with this statement. The user must first consider which setup suits them BEST in terms of DATA PROTECTION.

FreeNAS relies on a RAID-like setup, in the sense that data is striped across multiple disks. This means if you have more failed drives than parity, you are guaranteed to lose ALL your data, because effectively every single file will be missing a portion of it.

Unraid is, as its name suggests, NOT RAID. Each data disk has its own file system and there is no striping (i.e. each file is stored fully on only ONE disk). This means if you have more failed drives than parity, you will only lose SOME of your data (the files actually saved on the failed drives). Each file on the working drives is still a complete file. For the vast majority of users, losing some data is preferable to losing all data.

Available storage is a secondary concern, because if one does not care about losing all data then one should not even bother with parity; hence no parity, hence no available-storage concern.
  4. 2 points
Hi there, we have been working on a now open-source Unraid API. It simply runs as a docker container from the following template: https://github.com/ElectricBrainUK/docker-templates You can currently control multiple servers from a single container, start and stop VMs, edit USB and PCI devices, and edit and create VMs. Check out some screenshots of the basic UI I made to demo the app; it also contains docs of the REST endpoints (full docs available in the app, but I will add them to git soon): https://imgur.com/gallery/Ksje5BZ You can check out or PR to the source code here: https://github.com/ElectricBrainUK/UnraidAPI Let me know what you guys think and whether you find it useful. Cheers
  5. 2 points
Note: this community guide is offered in the hope that it is helpful, but comes with no warranty/guarantee/etc. Follow at your own risk.

What can you do with WireGuard? Let's walk through each of the connection types:

- Remote access to server: Use your phone or computer to remotely access your Unraid server, including Unraid administration via the webgui, and access to dockers, VMs, and network shares as though you were physically connected to the network.
- Remote access to LAN: Builds on "Remote access to server", allowing you to access your entire LAN as well.
- Server to server access: Allows two Unraid servers to connect to each other.
- LAN to LAN access: Builds on "Server to server access", allowing two entire networks to communicate. May require additional settings, TBD.
- Server hub & spoke access: Builds on "Remote access to server", except that all of the VPN clients can connect to each other as well. Note that all traffic passes through the server.
- LAN hub & spoke access: Builds on "Server hub & spoke access", allowing you to access your entire LAN as well.
- VPN tunneled access: Route traffic for specific Dockers and VMs through a commercial WireGuard VPN provider (see this guide).
- Remote tunneled access: Securely access the Internet from untrusted networks by routing all of your traffic through the VPN and out Unraid's Internet connection.

In this guide we will walk through how to set up WireGuard so that your trusted devices can VPN into your home network to access Unraid and the other systems on your network.

Prerequisites

- You must be running Unraid 6.8 with the Dynamix WireGuard plugin from Community Apps.
- Be aware that WireGuard is technically classified as experimental. It has not gone through a full security audit yet and has not reached 1.0 status. But it is the first open source VPN solution that is extremely simple to install, fast, and designed from the ground up to be secure.
- Understand that giving someone VPN access to your LAN is just like giving them physical access to your LAN, except they have it 24x7 when you aren't around to supervise. Only give access to people and devices that you trust, and make certain that the configuration details (particularly the private keys) are not passed around insecurely. Regardless of the "connection type" you choose, assume that anyone who gets access to this configuration information will be able to get full access to your network.
- This guide works great for simple networks. But if you have Dockers with custom IPs or VMs with strict networking requirements, please see the "Complex Networks" section below.
- Unraid will automatically configure your WireGuard clients to connect to Unraid using your current public IP address, which will work until that IP address changes. To future-proof the setup, you can use Dynamic DNS instead. There are many ways to do this; probably the easiest is described in this 2 minute video from SpaceInvaderOne.
- If your router has UPnP enabled, Unraid will be able to automatically forward the port for you. If not, you will need to know how to configure your router to forward a port.
- You will need to install WireGuard on a client system. It is available for many operating systems: https://www.wireguard.com/install/ Android or iOS make good first systems, because you can get all the details via QR code.

Setting up the Unraid side of the VPN tunnel

First, go to Settings -> Network Settings -> Interface eth0. If "Enable bridging" is "Yes", then WireGuard will work as described below. If bridging is disabled, then none of the "Peer type of connections" that involve the local LAN will work properly. As a general rule, bridging should be enabled in Unraid.

If UPnP is enabled on your router and you want to use it in Unraid, go to Settings -> Management Access and confirm "Use UPnP" is set to Yes.

On Unraid 6.8, go to Settings -> VPN Manager. Give the VPN tunnel a name, such as "MyHome VPN", then press "Generate Keypair". This will generate a set of public and private keys for Unraid. Take care not to inadvertently share the private key with anyone (such as in a screenshot like this).

By default the local endpoint will be configured with your current public IP address. If you chose to set up DDNS earlier, change the IP address to the DDNS address. Unraid will recommend a port to use. You typically won't need to change this unless you already have WireGuard running elsewhere on your network. Hit Apply.

If Unraid detects that your router supports UPnP, it will automatically set up port forwarding for you. If you see a note that says "configure your router for port forwarding..." you will need to login to your router and set up the port forward as directed by the note. Some tips for setting up the port forward in your router:

- Both the external (source) and internal (target/local) ports should be set to the value Unraid provides. If your router interface asks you to put in a range, use the same port for both the starting and ending values.
- Be sure to specify that it is a UDP port and not a TCP port.
- For the internal (target/local) address, use the IP address of your Unraid system shown in the note.
- Google can help you find instructions for your specific router, e.g. "how to port forward Asus RT-AC68U".

Note that after hitting Apply, the public and private keys are removed from view. If you ever need to access them, click the "key" icon on the right hand side. Similarly, you can access other advanced settings by pressing the "down chevron" on the right hand side. They are beyond the scope of this guide, but you can turn on help to see what they do.

In the upper right corner of the page, change the Inactive slider to Active to start WireGuard. You can optionally set the tunnel to Autostart when Unraid boots.

Defining a Peer (client)

- Click "Add Peer".
- Give it a name, such as "MyAndroid".
- For the initial connection type, choose "Remote access to LAN". This will give your device access to Unraid and other items on your network.
- Click "Generate Keypair" to generate public and private keys for the client. The private key will be given to the client / peer, but take care not to share it with anyone else (such as in a screenshot like this).
- For an additional layer of security, click "Generate Key" to generate a preshared key. Again, this should only be shared with this client / peer.
- Click Apply.

Note: Technically, the peer should generate these keys and not give the private key to Unraid. You are welcome to do that, but it is less convenient, as the config files Unraid generates will not be complete and you will have to finish configuring the client manually.

Configuring a Peer (client)

Click the "eye" icon to view the peer configuration. If the button is not clickable, you need to apply or reset your unsaved changes first.

If you are setting up a mobile device, choose the "Create from QR code" option in the mobile app and take a picture of the QR code. Give it a name and make the connection. The VPN tunnel starts almost instantaneously; once it is up you can open a browser and connect to Unraid or another system on your network. Be careful not to share screenshots of the QR code with anyone, or they will be able to use it to access your VPN.

If you are setting up another type of device, download the file and transfer it to the remote computer via trusted email or dropbox, etc. Then unzip it and load the configuration into the client. Protect this file; anyone who has access to it will be able to access your VPN.

About DNS

The 2019.10.20 release of the Dynamix WireGuard plugin includes a "Peer DNS Server" option (thanks @bonienl!)

If you are having trouble with DNS resolution on the WireGuard client, return to the VPN Manager page in Unraid, switch from Basic to Advanced mode, add the IP address of your desired DNS server into the "Peer DNS Server" field, then install the updated config file on the client. You may want to use the IP address of the router on the LAN you are connecting to, or you could use a globally available IP like 8.8.8.8.

This is required for "Remote tunneled access" mode if the client's original DNS server is no longer accessible after all traffic is routed through the tunnel. If you are using any of the split tunneling modes, adding a DNS server may provide name resolution on the remote network, although you will lose name resolution on the client's local network in the process. The simplest solution is to add a hosts file on the client that provides name resolution for both networks.

Complex Networks (added Oct 24)

The instructions above should work out of the box for simple networks. With "Use NAT" defaulted to Yes, all network traffic on Unraid uses Unraid's IP, and that works fine if you have a simple setup.

However, if you have Dockers with custom IPs or VMs with strict networking requirements, things may not work right (I know, kind of vague, but feel free to read the two WireGuard threads for examples). A partial solution is:

- In the WireGuard config, set "Use NAT" to No.
- In your router, add a static route that lets your network access the WireGuard "Local tunnel network pool" through the IP address of your Unraid system. For instance, for the default pool of 10.253.0.0/24 you should add this static route:
  Network: 10.253.0.0/16 (aka 10.253.0.0 with subnet 255.255.0.0)
  Gateway: <IP address of your Unraid system>
  (Note that this covers the entire class B 10.253.x.x network, so you can add other WireGuard tunnels without having to modify your router setup again.)

With these changes, your network should work normally. However, your WireGuard clients still may not be able to access Dockers on custom IPs or VMs. If you find a solution to this, please comment!
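For reference, the peer configuration file mentioned above (the same information that's in the QR code) looks roughly like this. This is a generic sketch with placeholder keys, the default tunnel pool, and an assumed 192.168.1.0/24 LAN, not literal Unraid output:

[Interface]
# The peer's own identity and its IP inside the tunnel pool
PrivateKey = <peer-private-key>
Address = 10.253.0.2/32
DNS = 8.8.8.8

[Peer]
# The Unraid server end
PublicKey = <server-public-key>
PresharedKey = <preshared-key>
Endpoint = your-ddns-address.example.com:51820
AllowedIPs = 10.253.0.1/32, 192.168.1.0/24

AllowedIPs is what implements the connection type: the list above only routes the tunnel and the remote LAN ("Remote access to LAN"), whereas "Remote tunneled access" would use 0.0.0.0/0 to send all of the client's traffic through the VPN.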
  6. 2 points
@CHBMB If you ever put us internet nerds before your daughter's movie watching I will be forced to uninstall the nvidia builds.
  7. 2 points
  8. 2 points
I hope @limetech is able to help these guys out with the build issue since rc6. It's a little disconcerting that a valued community development provider like linuxserver can't get anywhere with them by now; hopefully it will be resolved by the time 6.8 goes final. The unfortunate thing is that everyone who uses this plugin is stuck on rc5, which means fewer eyes helping out with bug reporting for the later RCs.
  9. 2 points
    My number 1 wish is better security https://forums.unraid.net/topic/80192-better-defaults/
  10. 2 points
@theDrell The fix is now live in the latest beta and might get its own point release as v1.50.2. If the beta fixes your problem, please report back in this thread.
  11. 1 point
It might work, but that's not an LSI firmware, so I'm not sure what kind it is; you'd want to use the LSI IT firmware.
  12. 1 point
I'd try changing the "php occ" part of each command to "sudo -u abc php7 /config/www/nextcloud/occ". That is, unless you've changed your working directory to /config/www/nextcloud/ with the command "cd /config/www/nextcloud/"; then you could use "sudo -u abc php7 occ".
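Put together, a full command would look something like this. The abc user, php7 binary, and paths are per the linuxserver container as described above; maintenance:mode is just an example occ subcommand:

# Run occ with the full path from anywhere inside the container
sudo -u abc php7 /config/www/nextcloud/occ maintenance:mode --on

# Or change into the Nextcloud directory first and use the short form
cd /config/www/nextcloud/
sudo -u abc php7 occ maintenance:mode --on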
  13. 1 point
If you have Ryzen, go into the BIOS, find the Power Supply Idle Control option and set it to "Typical Current Idle". It should be in the CPU frequency options of your BIOS. This is an old Ryzen bug. With the Typical Current Idle option enabled you should not need rcu_nocbs or any tweaks to the C-states. I'm running a Gigabyte Gaming 5 AX370 with a 2700X and it's rock solid. Also remember to keep your motherboard BIOS updated. https://bugzilla.kernel.org/show_bug.cgi?id=196683
  14. 1 point
Have been looking for NAS-type storage for home media use. Found this. Was comparing between this and FreeNAS and like this better. Just to be clear from some of the replies above on licensing: if I buy a license, it includes all future upgrades, correct? I've run Plex on a Dell PowerEdge server for years and was amazed at how quickly the docker spun up. Have watched several SpaceInvaderOne YouTube videos pertaining to Unraid. Thanks in advance for the help...
  15. 1 point
Possibly parity is just invalid and needs syncing; this can be confirmed by the parity checks if the errors are exactly the same on both runs.
  16. 1 point
You don't need this build to pass the card through to a VM, and it doesn't have anything to do with what it is recognized as in Linux. Is it the system devices list you are talking about?
  17. 1 point
Since you don't have any VMs yet, you can just disable VMs in VM Manager and then delete those libvirt folders. You don't want them on the array, and if you recreate them they should get recreated on cache anyway. Same for Docker: disable it and then delete those docker folders. Then you can recreate the docker image and it should get created on cache where it belongs. Apps - Previous Apps will add your dockers back just as they were. Do you know how to work at the command line to delete those folders? I usually just use the built-in mc (Midnight Commander) when working with files and folders directly on the disks. Afterwards, the system share should only exist on cache.
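If you'd rather do it from the terminal than mc, it would look something like this. This is a sketch: the disk number is an assumption, so check which array disks actually hold a copy of the system share first, and only delete with Docker and VMs disabled:

# See which array disks hold a copy of the system share
ls -la /mnt/disk*/system

# Then remove the stray folders from the disk(s) they landed on (disk1 is an example)
rm -r /mnt/disk1/system/libvirt
rm -r /mnt/disk1/system/docker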
  18. 1 point
    Why not use libvirt's built in snapshot? https://wiki.libvirt.org/page/Live-disk-backup-with-active-blockcommit
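A minimal sketch of that approach, assuming a domain named my-vm whose disk has the target name vda (the commands are the ones from the linked wiki page):

# Take an external disk-only snapshot; guest writes now go to an overlay file
virsh snapshot-create-as my-vm backup-snap --disk-only --atomic --no-metadata

# ...copy the original, now-quiescent disk image to your backup location...

# Merge the overlay back into the base image and pivot the VM onto it
virsh blockcommit my-vm vda --active --pivot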
  19. 1 point
That's the process that provides /mnt/user/ and /mnt/user0/. It's layered on top of FUSE (Filesystem in Userspace). Here are earlier threads about shfs, but those are cases where it was spiked at 100% CPU load, whereas yours was only showing 0.2%. Though maybe yours isn't that bad when averaged out...

Here's yours from 3.3 days (averaged cpu time per day of 29.09):

4462 root 20 0 145888 720 252 S 0.0 0.0 0:00.02 shfs
4475 root 20 0 1091060 17460 1080 S 0.0 0.2 96:10.02 shfs

And from my system at 38.25 days (averaged cpu time per day of 18.27):

root@TOWER:~# uptime
18:37:27 up 38 days, 6:04, 1 user, load average: 0.08, 0.07, 0.03
root@TOWER:~# top -bn1 | egrep -i shfs
6679 root 20 0 442648 33728 692 S 0.0 0.0 0:11.20 shfs
6692 root 20 0 787656 55648 828 S 0.0 0.0 699:01.70 shfs
  20. 1 point
    +1 for "Invalid unRAID`s MBR signature" on a (WD) shucked drive. Appreciate you effort gfjardim.
  21. 1 point
You probably want https://github.com/binhex/documentation/blob/master/docker/faq/general.md or for a video https://forums.unraid.net/topic/54834-video-guideall-about-docker-in-unraid-docker-principles-and-setup/?tab=comments#comment-535908&searchlight=1 The broken links are a side effect of looking at a 3-year-old post; some URLs changed during a forum software switch 2 years ago.
  22. 1 point
Please remove the preclear plugin and post diagnostics as requested. Also change the disk with the duplicate UUID.
  23. 1 point
There isn't really a "need" for a 2nd SSD. Each guide is specific to a particular use case, so ask yourself if it's your use case or not. If the guide says you need a Ferrari, that doesn't mean you need a Ferrari if you don't have a use for it.

1 GPU can only be used for 1 VM at any one time. It can be used across multiple VMs provided (a) the VMs do not run simultaneously and (b) the GPU doesn't have reset issues (mostly an AMD problem).

(Docker) containers are independent of VMs. For Unraid, you can more or less consider docker containers equivalent to Android apps.

Your incompatibility question is hard to answer. Of course there is hardware out there that is incompatible with a certain feature, or downright doesn't work, or may or may not work depending on the exact config / model. For example:

- The Intel 660p NVMe SSD can't be passed through via the PCIe method (Linux kernel issue)
- Marvell controllers basically don't work (again a Linux kernel issue)
- The Nvidia driver, under the right (wrong) conditions, will refuse to load with the infamous error code 43, i.e. it detected that it's being used in a VM

So really the only way to know for sure that there is zero issue is to confirm with someone with the exact same config. A MacOS VM is a rather niche use though, so I would recommend you check with SpaceInvaderOne for hardware issues.

In terms of stability (which seems to be what you meant by "maintenance"), once it's running, it won't randomly crash unless there's a hardware issue (which isn't an Unraid problem). Bad RAM, for example, will crash stuff - you are just more likely to notice it with a 24/7 server than an on-demand workstation.

It's an UNraid array. Very important to make the distinction that it is NOT RAID. For example, don't expect RAID-5 performance.

I have Android phones, Linux, Mac and Windows machines accessing the Unraid server without any specific app. The specific apps are more for specific use cases (e.g. if you want the nice Nextcloud interface instead of having to connect via smb).

I have added 3TB, 4TB, 6TB, 8TB and 10TB drives to my array at different points. I wouldn't call it "tricky". The ability to use mixed-size HDDs is a selling point of Unraid, so if it were tricky it would defeat the purpose.
  24. 1 point
Yes, but that's not in disk.cfg; you can create a script with the User Scripts plugin and have it run at array start.
  25. 1 point
I migrated from Xpenology to Unraid with no prior knowledge or real experience of implementing what I wanted. I now have a Gen10 MicroServer running:
1. Pihole - blocking and DHCP
2. Plex
3. Nextcloud
4. Photo library
Plus a whole raft of other Dockers that I'm now using. Unraid has been a joyous learning experience. Everything is intuitive and reliable, dockers just work, and the community is extremely helpful on the rare occasions things don't quite go to plan. Not sure about Homebridge or photo editing, but you can spin up a Mac for the latter. I'm finding it does what I wanted and much, much more, and would recommend it to anyone.
  26. 1 point
***Update***: Apologies, it seems there was an update to the Unraid forums which removed the carriage returns in my code blocks. This was causing people to get errors when typing commands verbatim. I've fixed the code blocks below and all should be Plexing perfectly now.

===========

Granted this has been covered in a few other posts, but I just wanted to have it with a little bit of layout and structure. Special thanks to [mention=9167]Hoopster[/mention] whose post(s) I took this from.

What is Plex Hardware Acceleration?

When streaming media from Plex, a few things are happening. Plex will check against the device trying to play the media:

- Media is stored in a compatible file container
- Media is encoded in a compatible bitrate
- Media is encoded with compatible codecs
- Media is a compatible resolution
- Bandwidth is sufficient

If all of the above are met, Plex will Direct Play, i.e. send the media directly to the client without being changed. This is great in most cases as there will be very little if any overhead on your CPU. This should be okay in most cases, but you may be accessing Plex remotely or on a device that is having difficulty with the source media. You could either manually convert each file or get Plex to transcode the file on the fly into another format to be played.

A simple example: your source file is stored in 1080p. You're away from home and you have a crappy internet connection. Playing the file in 1080p is taking up too much bandwidth, so to get a better experience you can watch your media in glorious 240p without stuttering / buffering on your little mobile device by getting Plex to transcode the file first. This is because a 240p file will require considerably less bandwidth compared to a 1080p file.

The issue is that depending on which format you're transcoding from and to, this can absolutely pin all your CPU cores at 100%, which means you're gonna have a bad time. Fortunately Intel CPUs have a little thing called Quick Sync, which is their native hardware encoding and decoding core. This can dramatically reduce the CPU overhead required for transcoding, and Plex can leverage this using their Hardware Acceleration feature.

How Do I Know If I'm Transcoding?

You're able to see how media is being served by playing something on a device. Log into Plex and go to Settings > Status > Now Playing. As you can see, this file is being direct played, so there's no transcoding happening. If you see (throttled) it's a good sign. All it means is that your Plex Media Server is able to perform the transcode faster than is necessary.

To initiate some transcoding, go to where your media is playing and click on Settings > Quality > Show All > choose a quality that isn't the default one. If you head back to the Now Playing section in Plex you will see that the stream is now being transcoded. I have Quick Sync enabled, hence the "(hw)", which stands for, you guessed it, Hardware. "(hw)" will not be shown if Quick Sync isn't being used in transcoding.

Prerequisites

1. A Plex Pass - if you require Plex Hardware Acceleration, test to see if your system is capable before buying a Plex Pass.
2. An Intel CPU that has Quick Sync capability - search for your CPU using Intel ARK.
3. A compatible motherboard.

You will need to enable the iGPU in your motherboard BIOS. In some cases this may require you to have the HDMI output plugged in and connected to a monitor in order for it to be active. If you find that this is the case on your setup you can buy a dummy HDMI doo-dad that tricks your unRAID box into thinking that something is plugged in.

Some machines like the HP MicroServer Gen8 have iLO / IPMI, which allows the server to be monitored / managed remotely. Unfortunately this means that the server has 2 GPUs and ALL GPU output from the server passes through the ancient Matrox GPU. So as far as any OS is concerned, even though the Intel CPU supports Quick Sync, the Matrox one doesn't. =/ You'd have better luck using the new unRAID Nvidia plugin.

Check Your Setup

If your config meets all of the above requirements, give these commands a shot; you should know straight away if you can use Hardware Acceleration. Login to your unRAID box using the GUI and open a terminal window, or SSH into your box if that's your thing. Type:

cd /dev/dri
ls

If you see an output like the one above, your unRAID box has Quick Sync enabled. The two items we're interested in specifically are card0 and renderD128. If you can't see them, not to worry; type this:

modprobe i915

There should be no return or errors in the output. Now again run:

cd /dev/dri
ls

You should see the expected items, i.e. card0 and renderD128.

Give your Container Access

Lastly we need to give our container access to the Quick Sync device. I am going to passive-aggressively mention that they are indeed called containers and not dockers. Dockers is a boots and pants manufacturer and has nothing to do with virtualization or software development, yet. Okay, rant over.

We need to do this because the Docker host and its underlying containers don't have access to anything on unRAID unless you give it to them. This is done via Paths, Ports, Variables, Labels, or in this case Devices. We want to provide our Plex container with access to one of the devices on our unRAID box. We need to change the relevant permissions on our Quick Sync device, which we do by typing into the terminal window:

chmod -R 777 /dev/dri

Once that's done, head over to the Docker tab and click on your Plex container. Scroll to the bottom, click on "Add another Path, Port, Variable", select Device from the drop down and enter the following:

Name: /dev/dri
Value: /dev/dri

Click Save followed by Apply.

Log back into Plex and navigate to Settings > Transcoder. Click on the button to SHOW ADVANCED and enable "Use hardware acceleration where available". You can now do the same test we did above by playing a stream, changing its quality to something that isn't its original format and checking the Now Playing section to see if Hardware Acceleration is enabled. If you see "(hw)", congrats! You're using Quick Sync and Hardware Acceleration [emoji4]

Persist your config

On reboot, unRAID will not run those commands again unless we put them in our go file. So when ready, type into the terminal:

nano /boot/config/go

Add the following lines to the bottom of the go file:

modprobe i915
chmod -R 777 /dev/dri

Press Ctrl+X, followed by Y, to save your go file. And you should be golden!
  27. 1 point
You'd lose any data on the emulated disk3 that doesn't exist on the old disk3, i.e., any new data written after the replacement; all the other data will remain as it was. Just make sure to assign the old parity disk to the parity slot.
  28. 1 point
    Sooo muuchh better, Thank you!!
  29. 1 point
Thanks SpaceInvader! I started over, followed those^ instructions, and got a working 2-core VM with passthrough. The key issue was just installing the Nvidia drivers remotely while the screen was garbled. THEN I went and tried to change the number of cores in the VM (and made all the right edits to the XML again), and somehow this resulted in the VM not outputting to the display. It boots with a black monitor at 1280x1024 resolution. I can Splashtop in and use the VM no problem, but it won't even show me the full 2560x1440 resolution option in OS X preferences. Any thoughts? Here is what I see in display preferences:
  30. 1 point
Probably would have been better if you had recreated the docker image instead of moving it. And there's no point in having libvirt if you aren't running VMs.
  31. 1 point
    I don't see anything obvious, except your system share has files on the array instead of cache where they belong. Possibly you created docker and/or libvirt image before you added cache so they got created on the array. You don't mention any VMs, do you have any? Docker image isn't full now but maybe you overfilled it in the past, so I guess it's possible you have docker image corruption, but syslog doesn't have much after the reboot so can't really tell anything from that. Didn't take the time to look at SMART for all of your disks. Are you getting any SMART warnings on the Dashboard? You might delete docker image and recreate it so it will be on cache. Apps - Previous Apps will add your dockers back just as they were. I'm not familiar with some of those plugins but CA and UD should be fine. Maybe try running without the others for a while. Setup Syslog Server so you can retain syslogs after rebooting and maybe we can tell more if you continue to have problems.
  32. 1 point
Update to 6.8. 6.7.2 has an issue with simultaneous reads and writes being very slow.
  33. 1 point
    Not in the parity array. You can use any valid BTRFS RAID level in the cache pool however. Whether or not that is advisable is up to your risk tolerance.
  34. 1 point
    If you use the option to Edit the VM then there is a toggle at the top right to switch between form and xml modes.
  35. 1 point
    Application Name: Heimdall Application Site: https://heimdall.site or https://github.com/linuxserver/Heimdall Docker Hub: https://hub.docker.com/r/linuxserver/heimdall/ Github: https://github.com/linuxserver/docker-heimdall Please post any questions/issues relating to this docker you have in this thread. If you are not using Unraid (and you should be!) then please do not post here, instead head to linuxserver.io to see how to get support.
  36. 1 point
It's best not to mess with these files. I suggested in an earlier post that you set the UD setting for the SMB shares to enable hidden shares. The shares will not be browseable in Windows. Isn't this what you are looking for?
  37. 1 point
  38. 1 point
Not just you... this docker gets very frequent updates. The auto updater plugin is your friend in this case: set it and forget it.
  39. 1 point
The nmap package was updated by me. The repository was not compromised. This has been discussed ad nauseam. I removed the package, and then I agreed to leave nmap in as a convenience to users. I won't remove the package from the plugin. Conditional plugin installation is not easy; I'm not sure what could be done except your manual removal.
  40. 1 point
Kernel update:
* build for 6.8.0rc5 with 5.3.8
* add navi-reset.patch
* add vega-reset.patch
* add pci-reset-quirk.patch for those who can't do a BIOS update
  41. 1 point
There was an update for the container at midnight and my server auto-updated at 6am without problems, and it looks like my dbengine database files are persistent now. So it looks like the "delete obsolete charts files = no" setting solved the problem.
  42. 1 point
You only need it for private shares. If you set the Security option to Private, a Rule box appears into which you need to enter the code. Public shares are easier; I'd experiment with those first. On the client you mount an NFS share using the mount command like this:

mount -t nfs tower:/mnt/user/name-of-share /mnt/mount-point

which is similar to how you would mount an SMB share. Note that you have to specify the full path to the mount (i.e. tower:/name-of-share wouldn't work) and /mnt/mount-point must already exist on the client. To unmount the share you use either

umount tower:/mnt/user/name-of-share

or

umount /mnt/mount-point

This information is summarised here: https://linuxize.com/post/how-to-mount-an-nfs-share-in-linux/
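If you want the share mounted automatically at boot on the client, an /etc/fstab entry along these lines would do it (same placeholder server, share, and mount point as above):

# /etc/fstab on the client
tower:/mnt/user/name-of-share  /mnt/mount-point  nfs  defaults  0  0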
  43. 1 point
    You can't use a subdomain of the main domain in the extra domain. That is why you get that error.
  44. 1 point
    Are they both on the same controller? Some controllers (old / ancient) only supported a max of 2.2TB
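For context, that 2.2TB ceiling comes from 32-bit LBA addressing on older controllers: 2^32 addressable sectors x 512 bytes per sector = 2,199,023,255,552 bytes, or about 2.2TB. Sectors beyond that simply can't be addressed, so larger drives show up truncated or not at all.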
  45. 1 point
One thing I forgot to add: mount_unionfs will also protect your cloud storage, since any file will be downloaded first before any change is made to it (including deletes, which unionfs handles by just hiding the file).
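For anyone unfamiliar with the setup being described, a unionfs-fuse mount generally looks something like this. The paths are placeholders and your actual mount_unionfs invocation may differ:

# Merge a local read-write branch over a read-only cloud mount;
# writes and deletes land in /mnt/local, the cloud copy is never modified
unionfs-fuse -o cow /mnt/local=RW:/mnt/cloud=RO /mnt/merged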
  46. 1 point
Noted, and thankfully noted is all that's needed, since you saved me from falling into a hole of disappointment if that's what I had tried. Occasional gaming etc. is enough for me in any case. In this regard, Unraid gaming in general fits my lifestyle perfectly at the moment (kid, condo, job, minimal free time).
  47. 1 point
You should recreate the docker image, but before doing that: it appears there was an unclean shutdown during a cache balance operation, and the cache pool is now stuck trying to continue the balance. Better to back up anything on cache and recreate the pool. Since you're on v6.8 it will be created with redundancy, which it previously lacked because of a bug in v6.7.
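If you want to confirm the stuck balance before backing anything up, btrfs can show and cancel it (assuming the pool is mounted at /mnt/cache as usual):

# See whether a balance is running or paused on the pool
btrfs balance status /mnt/cache

# Cancel it before backing up and recreating the pool
btrfs balance cancel /mnt/cache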
  48. 1 point
The problem was that in 6.8.0-rc1 I could not access the Flash Drive (boot drive) from Krusader. The reason is that Limetech changed the permissions on the flash drive when it is mounted in this version. The only way in this version (and all future versions) to access the drive is through SMB or as user 'root'. (Understand that they are insistent that they are not going to change this.) Currently, Krusader is being run as user 'nobody'. For anyone else looking for an answer, the answer is "YES". All you have to do is edit the Krusader docker and change the PUID and PGID to those of the 'root' user. It is currently set for the 'nobody' user. On my system, the 'root' PUID is 0 and PGID is 0. I ran some tests and did not find any problems after making the changes.
  49. 1 point
Thus far no issues or errors; everything is in the green and the VMs are very responsive. I didn't have proper performance tests beforehand, but the VMs were definitely slower and lagging before I pinned the cores, which is all gone now.
  50. 1 point
    I can confirm that Plex hardware transcoding continues to work with the above settings. Very happy with my setup now.