Everything posted by dnLL

  1. It seems like they have the same ID.
        root@server:~# exportfs -v
        /mnt/user/backups      <world>(rw,async,wdelay,hide,no_subtree_check,fsid=100,anonuid=99,anongid=100,sec=sys,insecure,root_squash,all_squash)
        /mnt/user/discord      <world>(rw,async,wdelay,hide,no_subtree_check,fsid=100,anonuid=99,anongid=100,sec=sys,insecure,root_squash,all_squash)
        /mnt/user/qbittorrent  <world>(rw,async,wdelay,hide,no_subtree_check,fsid=101,anonuid=99,anongid=100,sec=sys,insecure,root_squash,all_squash)
     How is that possible? Can it be fixed without deleting the share and/or rebooting?
  2. All right, this is extremely weird.
        root@server:~# showmount -e
        Export list for server:
        /mnt/user/qbittorrent *
        /mnt/user/discord     *
        /mnt/user/backups     *
     Tested from multiple CentOS VMs: if I mount /mnt/user/discord, it actually mounts /mnt/user/backups. However, if I mount /mnt/user/discord/something, it correctly mounts the discord sub-folder. I've seen weird things, but this... What I've tried: restarting the NFS service, rebooting the VMs, manually mounting the NFS instead of using /etc/fstab, disabling the discord NFS share and re-enabling it... none of that works. What does work, however, is disabling the backups NFS share. If I disable it, without even having to umount /mnt/user/discord, the mount now points to the correct directory. If I enable the backups NFS share again, /mnt/user/discord mounted on my VM points to the backups share once more. I'm so confused. Have you ever seen something like that? Is there some sort of file on unRAID where I can see more info about the NFS share configuration?
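     Both NFS posts above come down to the duplicated fsid: /mnt/user/backups and /mnt/user/discord are both exported with fsid=100, and the fsid is what identifies an export to the client, so the two shares look like the same filesystem. For illustration only, this is what distinct fsid values would look like in a hand-maintained /etc/exports (unRAID generates this file from the share settings, so the real fix belongs in the GUI; the option strings are trimmed from the exportfs output above):
        # each export needs its own fsid
        /mnt/user/backups      *(rw,async,no_subtree_check,fsid=100,all_squash,anonuid=99,anongid=100)
        /mnt/user/discord      *(rw,async,no_subtree_check,fsid=102,all_squash,anonuid=99,anongid=100)
        /mnt/user/qbittorrent  *(rw,async,no_subtree_check,fsid=101,all_squash,anonuid=99,anongid=100)
     Once the file carries unique IDs, exportfs -ra re-applies the exports without a reboot.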
  3. Does this plugin support non-PWM fans? Their speed can usually still be controlled through voltage. My regular fan speed is detected at 700 rpm (silent mode in the BIOS) but the auto-configuration doesn't see it. I have an ASRock Rack board.
  4. Honestly, it was kind of implicit in "I don't have anything important" that my data (which is basically just my Plex library) is definitely not a big target for this attack. Not to mention that I need to connect to my VPN to even reach my Plex. If the NSA or the Canadian equivalent (the CSIS, we like them with 4 letters in Canada) want to know which movies I'm watching, I couldn't care less (except for the part where my money is paying them to watch me, but that's another debate).
  5. Thanks. So depending on the workload, it can be a small or a big difference, but it's always a positive difference nevertheless. Well, thanks for the plugin, I just enabled it, will reboot tomorrow. Don't need the security fixes at all personally, I don't have anything that important.
  6. Has anyone run any sort of benchmark on this? I'm just legitimately curious.
  7. Hi, just looking for the easiest approach here. I created a VM with virbr0 in unRAID, so it's sitting in the 192.168.122.x segment, which is fine. However, it can still ping my LAN IPs, including unRAID (though not by hostname). I would like to prevent that, since this VM is for testing purposes and I don't want it reaching the rest of my network. I'm aware I could do this on my router, but I was wondering if I could just use unRAID's iptables to add a rule blocking connections from the VM to the rest of my LAN. Since such a rule would indirectly prevent the VM from accessing its host, I'm thinking it would be safe enough for most purposes? I have no experience at all with iptables, so excuse my ignorance if such a solution is either impossible or not safe at all.
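     A minimal sketch of the kind of rule described above, assuming the VM traffic enters on virbr0 from 192.168.122.0/24 and the LAN is 192.168.1.0/24 (both addresses are assumptions, adjust to your subnets):
        # reject forwarded traffic from the test VM network to the LAN range
        iptables -I FORWARD -i virbr0 -s 192.168.122.0/24 -d 192.168.1.0/24 -j REJECT
     Inserted with -I it should land ahead of the accept rules libvirt adds for virbr0, and because only the LAN destination range is rejected, NAT to the internet keeps working. Being a live iptables change, it would need to be re-applied after a reboot (for example from the go file).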
  8. And with libvirt.img being on the array in ./system, losing the flash drive doesn't matter. Is there any reason to back up the full 1GB libvirt.img rather than just saving what's in /etc/libvirt? The actual content of that filesystem is very small compared to the 1GB IMG container.
        root@server:~# du -sh /etc/libvirt/
        1.0M    /etc/libvirt/
        root@server:~# ls -l /lib/virt/
        /bin/ls: cannot access '/lib/virt/': No such file or directory
     Also, you mentioned /lib/virt; not sure if there is anything supposed to be in there.
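     A sketch of what backing up just the config tree could look like, assuming the destination is the backups share seen earlier (path and file name are placeholders):
        # archive the small /etc/libvirt tree instead of the whole 1GB image
        tar -czf /mnt/user/backups/libvirt-etc-$(date +%F).tar.gz -C / etc/libvirt
     The trade-off is that restoring from the tarball means recreating libvirt.img and copying the config back in, whereas restoring the full image is a single file copy.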
  9. I'm also very interested in this. I tried it: took a backup of a 20GB vdisk while the VM was running, then restored it (making sure to disable the use of local files in Duplicati) and ran md5sum on the original and the restored file. The md5sums were different, sadly. However, I was still able to boot the restored vdisk and didn't notice any difference in the installed OS/applications within the image. So... it kinda works?
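     If the goal is just to confirm the restored image carries the same data, qemu-img compare (assuming qemu-img is present on the host, which it normally is on a KVM system) checks the disk contents rather than the raw byte layout, so sparse regions or allocation differences don't cause a false mismatch; the paths below are placeholders:
        # compare original and restored vdisk by content
        qemu-img compare /mnt/user/domains/vm1/vdisk1.img /mnt/user/restore/vdisk1.img
     A backup taken while the VM is writing can of course differ for real, so a clean comparison needs the VM shut down or the backup taken from a snapshot.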
  10. Hi all, I have a Duplicati docker taking care of my backups currently and I'm trying to have everything centralized in one place as much as possible instead of using different plugins to back up different things. Duplicati already backs up all my vdisks and I'm trying to configure it to take care of my VM settings (XML and nvram). I pointed it at /etc/libvirt/qemu/ with some exclusions and it works for the nvram subfolder but not for the XMLs themselves. I noticed the permissions on the XMLs are set to 600 while the nvram files are 666, so I assume that's the issue here. Now, before changing the XML file permissions to 644, I have some questions:
     Are the XMLs and nvram files in /etc/libvirt/qemu regenerated after each reboot, or do changes in that folder magically stay permanent even after a reboot?
     (1a) If the files are permanent, are permissions persistent? What about new XMLs? What would be the best approach?
     (1b) If the files are regenerated, I could have User Scripts run a script to change permissions to 644 every month before Duplicati takes the backup; is there any reason not to change the XML file permissions?
     Thank you.
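     If it turns out the files are regenerated, a User Scripts job along these lines would cover the permission change before the Duplicati run; this is only a sketch, the path is the one from the post and everything else is an assumption:
        #!/bin/bash
        # make the domain XMLs readable for the backup, leave the nvram folder untouched
        chmod 644 /etc/libvirt/qemu/*.xml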
  11. Funnily enough, after pinning all my VMs to a full core, some switched their topology correctly to 1c2t while others still had 2c1t. Not sure how the webUI handles all of this when not using the XML form format.
  12. Sorry, that probably wasn't very clear. Basically, operating systems handle threads differently when they belong to the same core, but for VMs it's usually just vCPUs, without any way for the guest to know whether the 2 threads come from the same core or not. I saw some "hacks" (i.e. not limiting yourself to the unRAID webUI) to make sure the VM would properly use the 2 threads as one full core with HT rather than 2 distinct threads from 2 separate cores.
  13. In fact I have 7 VMs and 1 docker. Some of the VMs barely use 100MB of RAM and hardly any CPU at all; they could be dockers instead, but that's another debate. I made some changes now, giving each VM a full core with its hyperthread. Core 0 (with HT) is pinned to nothing, Core 1 (with HT) is pinned to my two most important VMs (still a light workload), Core 2 (with HT) is pinned to a VM with a higher workload, and Core 3 (with HT) is pinned to my last 3 VMs, which aren't that important. This way unRAID has Core 0 to itself. As for my Plex docker, I used something I don't see suggested at all in the FAQ: the --cpus=6 parameter, which gives access to 75% (6/8) of every core/thread rather than locking the docker to specific cores/threads. So even if my Plex docker is working very hard, it will never use more than 75% per core, leaving at least 25% of a full core to the VMs and to unRAID itself. I think I'm covering most of my workload possibilities this way after doing some quick tests (obviously if Plex is taking 75% of everything and a VM needs its full core, they will compete for CPU resources, but that's fine). One last question: I was reading in some old posts that the VM couldn't know whether its 2 vCPUs were a hyperthreaded core or just 2 random cores/threads, and people were adding parameters to their XMLs to make sure KVM let the VM know it's one core with HT. Is that still necessary?
  14. Sorry if this has already been asked, I couldn't find an answer (please point me in the right direction if there is one). I have a 4-core CPU with HT, so 8 threads total. Most of my VMs have a VERY light workload and would be totally fine with 1 hyperthread without needing the full core. In fact, it's been working that way for a while, until I decided to question whether it was optimal or not. Let's say I have 10 VMs with the following CPU pinnings:
     VM 1: vCPU 0, 4
     VM 2: vCPU 0, 4
     VM 3: vCPU 1, 5
     VM 4: vCPU 1, 5
     VM 5: vCPU 2
     VM 6: vCPU 3
     VM 7: vCPU 6
     VM 8: vCPU 7
     VM 9: vCPU 7
     VM 10: vCPU 7
     Is there anything wrong here, assuming the VMs sharing the same vCPU never really use more than ~10% load, or is assigning a hyperthread to a VM without assigning the full core simply wrong (and why)?
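     To make the two posts above concrete, this is roughly what the relevant part of a VM's XML looks like when one full core plus its hyperthread is pinned and the guest is told the two vCPUs are sibling threads of one core; host CPUs 1 and 5 are an assumed pairing from the 4c/8t layout above:
        <vcpu placement='static'>2</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='1'/>
          <vcpupin vcpu='1' cpuset='5'/>
        </cputune>
        <cpu mode='host-passthrough'>
          <topology sockets='1' cores='1' threads='2'/>
        </cpu>
     The <topology> element is the kind of parameter the posts above refer to: without it the guest just sees two unrelated vCPUs, with it the guest schedules them as one hyperthreaded core.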
  15. I'm really curious about this. How does it work? I'd like some more technical network advice. If the VM is the only router, how do the different devices still communicate with each other when unRAID is turned off? Let's say it's a 4-port PCI card with 1 port set as WAN and the 3 others as LAN. The WAN port has the modem, 2 of the LAN ports have the IPMI interface and unRAID (motherboard NIC), and the last one has the only other computer. How can that computer communicate with the IPMI interface if the tower is shut down? Can communication actually go through the PCI NIC even though it doesn't have power? I'm confused.
  16. I just noticed that Python is a requirement for iotop... fair enough. Uninstalled it all and it's fine. I was wondering if it could be another plugin that required it or something, but I assume that's never the case, otherwise it would be installed outside Nerd Pack...
  17. I'm currently uninstalling some of the stuff I don't need and noticed in Nerd Pack that I have python, pip, tmux and utempter installed. I don't remember ever installing these. Could they be enabled by default (and if so, why)? Also, I tried uninstalling them but when I untick Python and click Apply, it won't uninstall Python.
  18. Hey, I'm trying to standardize all my permissions across my shares but I'm still a little bit confused about what the nobody user and its users group are, and what permissions on files owned by nobody:users mean exactly. I noticed that all my /mnt/user/appdata/plex/* files have 600 permissions with nobody:users, yet the Plex docker has no issue reading/writing these files. Does that mean the docker engine in unRAID runs as the nobody user? What about VMs and NFS/SMB? root@vm can read/write from the NFS too, even though it's 600 permissions on nobody:users. It almost seems like "nobody" is everybody... as if it were root:root but with 666 permissions. I'm definitely missing something here; any help would be appreciated as I haven't been able to find anything relevant after a quick search. Thank you!
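     A quick way to check part of this is to compare the numeric owner of the appdata files with the account the container actually runs as; the container name "plex" is an assumption, adjust it to whatever the docker is called. The anonuid=99/anongid=100 seen in the NFS exports earlier suggests nobody is UID 99 and users is GID 100 on unRAID.
        # numeric UID:GID of the appdata files
        ls -ln /mnt/user/appdata/plex
        # UID/GID the container's processes run as
        docker exec plex id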
  19. I'm currently working on my backup solution and trying to figure out a good balance between security and convenience. Unplugging a hard disk is what I would consider a 100% secure way to make sure my data isn't modified by a virus/ransomware/etc., but it gets a nice 0% score for convenience since I have to manually plug it back in when I want to make another backup. Is unmounting a disk generally enough to prevent anything bad from happening? What else could I do (besides off-site backup)?
  20. I just want to chime in to say that I installed Duplicati earlier today, configured a couple of backups and tested them, and, assuming the scheduler and retention work as expected, this is by far the most complete backup solution I ever dreamed of having in unRAID. You can back up to your array, to another Samba share, to a USB drive or directly into the cloud... encrypted or not... and easily export your backup configs so that you can restore your data even if you lose your whole unRAID server in some catastrophe. I don't know if it's perfect protocol-wise (could it have better encryption, better compression, better performance and so on), but it definitely does everything I need and will bring peace of mind once properly set up and tested over time (the smart retention option is a very nice business-class feature too).
  21. I can't, the board doesn't POST (error code 93) if onboard is primary. The only way I was able to boot was having PCIe as primary, VGA completely disabled and IGFX enabled. I assume the PCIe card would take over from the IGFX as primary, but what does primary mean exactly when you're not actually using any video outputs?
  22. It's not; I only have the choice between PCIe and Onboard (ATS chip). It's currently on PCIe but I don't have any PCIe card. The IGFX option is set to Enabled.
  23. I'm now wondering... if I hook a PCIe GPU card up, will I be able to keep using the IGP for Plex while passing through the dedicated GPU to a Windows VM?
  24. Yup, edited with a CPU load graph showing before/after... awesome difference. I also got to know a lot more about my BIOS and unRAID boot settings in general; I'll do some backups of those settings now, seeing how important they are. I already use the User Scripts plugin a lot to automate stuff across reboots, but the go file just seems to be the way to go for any really serious business.
  25. More updates... /dev/dri appears after I run "modprobe i915". If I run it a second time, I also get output:
        modprobe: FATAL: Module i915 not found in directory /lib/modules/4.18.20-unRAID
     I guess that's normal. So all I need to do is add the modprobe command to my go file, do some chmod, add the extra parameter to the Plex docker and bam... currently transcoding with HW acceleration and 0% CPU usage. Edited the go file, rebooted again... still working. Yay. Thank you, I owe you a lot. This was quite the adventure lol. Edit: here is an image showing the CPU loads before and after while transcoding movies without throttling: Totally worth the effort. I'm surprised how CPU-hungry Plex can be if you let it. Obviously the 60s buffer throttling option largely mitigates the CPU load eventually on both sides, but still.
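     For anyone finding this later, a sketch of what those go-file additions could look like; the chmod value and the Plex extra parameter are the commonly used ones, not something taken verbatim from the post:
        # /boot/config/go additions
        modprobe i915            # load the Intel iGPU driver so /dev/dri shows up after boot
        chmod -R 777 /dev/dri    # let the container user open the render nodes
     The matching Plex docker "Extra Parameters" entry is typically --device=/dev/dri so the container can see the device.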