Everything posted by sonofdbn

  1. IIRC, unRAID didn't have a cache drive feature initially. The reason I liked unRAID was the protection provided by the parity drive, and I think there's a case to be made that this was the main selling point of unRAID at the time. When the cache drive feature came out I decided not to use it for caching writes (and still don't) because I wanted the assurance that once I had written data to the server it was parity-protected. Sure, the risk of cache drive failure before the data is moved is low, but for me the peace of mind outweighed the speed gain.
  2. Yes, I know there are actually quite a few alternatives, but this one seems suitable for my use case.
  3. Has anyone successfully installed a GoogleHomeKodi (henceforth GHK) docker? GHK is a way to control Kodi using Google Home, and one component is having a GHK docker (there are alternatives to this component). I found the docker on DockerHub via community applications, but it doesn't come with any template. I fiddled around trying to add parameters manually, but it's way above my level of inexpertise. I did find a template (by @CHBMB) but don't know how to use that with the docker installation. While I could play around a bit more, I thought I'd ask here first, hoping this will save me a lot of time.
  4. Good to know. Any experience with SSDs? My Googling didn't turn up anything that looked credible, and the specs of my new SanDisk SSD don't mention power consumption at all.
  5. So does this mean that since 2 disks could need 4 to 5 amps during spin up, I shouldn't use any SATA power splitter at all for hard disks? What about 1 hard disk and 1 SSD, or 2 SSDs?
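The splitter question above is essentially a current-budget sum. Here's a back-of-the-envelope sketch; all figures are assumptions for illustration (roughly 2-2.5A of 12V spin-up current per hard disk, a nominal ~4.5A rating for a typical splitter's wiring, and a small startup draw for SSDs, which have no motor to spin up) — check your drives' datasheets and the splitter's actual rating.

```python
SPLITTER_RATING_A = 4.5  # assumed rating of a typical SATA splitter's wiring
STARTUP_AMPS = {"hdd": 2.5, "ssd": 0.3}  # assumed worst-case startup draw per device

def peak_amps(devices):
    # Worst case: every device hits its startup peak at the same moment.
    return sum(STARTUP_AMPS[d] for d in devices)

for combo in (["hdd", "hdd"], ["hdd", "ssd"], ["ssd", "ssd"]):
    amps = peak_amps(combo)
    verdict = "OK" if amps <= SPLITTER_RATING_A else "over budget"
    print(combo, f"{amps:.1f}A", verdict)
```

Under these assumed numbers, two hard disks exceed the splitter's budget at spin-up, while one HDD plus one SSD, or two SSDs, are comfortably within it — which matches the usual advice to avoid splitting a single SATA power lead across two spinning disks.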
  6. Thanks so much for the answers; time to get a new drive. (On my current setup, metadata takes up 5GB on each disk.)
  7. I'm on 6.5.3 and run a two-disk (SSD) btrfs RAID1 cache pool. One disk is 525GB and one is 1TB. My question is: what is my pool size (in terms of how much data I can store on it)? My guess is that with the two disks I have, and with RAID 1, my pool size is actually 525GB. I'm thinking of increasing the size of my cache pool, and I'd like to replace the smaller drive with a 1TB drive. Will that increase my pool size to 1TB? I read that btrfs also stores some metadata: does that take up a lot of space?
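A rough way to reason about the pool size in the question above: btrfs RAID1 writes every chunk to two different devices, so usable space is capped both by half the raw total and by how much data can be paired against the largest device. This sketch ignores metadata overhead, which is small relative to these disk sizes.

```python
def btrfs_raid1_usable(sizes_gb):
    # Each chunk is mirrored on exactly two devices, so usable capacity is
    # min(half the raw total, what can be paired against the largest disk).
    total = sum(sizes_gb)
    return float(min(total / 2, total - max(sizes_gb)))

print(btrfs_raid1_usable([525, 1000]))   # -> 525.0 (the smaller disk limits the pool)
print(btrfs_raid1_usable([1000, 1000]))  # -> 1000.0 after swapping in a 1TB disk
```

So with the current 525GB + 1TB pair the pool holds about 525GB, and replacing the smaller drive with a 1TB one should roughly double that.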
  8. Thanks - everything fixed now. As suggested, I stopped the container, set the Nextcloud share to use cache disk and ran the mover. After the mover was done, there were no more Nextcloud files on the cache disk. I remembered to set the share to not use the cache disk before restarting the container 😉
  9. Is this a Nextcloud configuration issue or a docker setting issue? My Nextcloud share has "Use cache disk" set to No. In the docker settings I have container path /data set to /mnt/user/Nextcloud. I thought perhaps it might have something to do with something raised on the first page of this topic (see below), but I don't know how to fix this. When I set up the data folder I probably used /data instead of a Nextcloud specific folder (which I think is suggested in the linked image above). If this is the problem, is there any way of reconfiguring this to keep files off the cache? Now that I think of it, I'm sure I used @SpaceInvaderOne's video to set up Nextcloud, and on looking at it again, I see that originally the Nextcloud share is set to use the cache disk. So it's likely that I did that initially and then set it to not use the cache disk later. So if the share is now set to not use the cache disk, does this mean that no new files will be written to the cache? Will updates to files already on the cache be kept on the cache?
  10. I'm running 16.0.1 (I think I started with 14.0), and recently found that I have data files on my cache drive at /mnt/cache/Nextcloud. I didn't expect this because my Nextcloud share is set to not use the cache drive, and the files are taking up a lot of my cache drive. I'd like to get the files off my cache drive. Can I use, say MC, to move the files to a data drive which already has a Nextcloud share folder on it? e.g., /mnt/disk1/Nextcloud without messing up my Nextcloud docker? Is there another or better way of fixing this? Also, how do I prevent files from being written to /mnt/cache/Nextcloud in future?
  11. Thanks. I didn't realise that's what happened with magnets; and now I see the torrent files are in the session folder.
  12. I have a lot of .meta files in my download directory. Is it safe to delete them? If it is relevant, I usually use a magnet file to start a download, and I notice that the download starts with a META file before switching to the actual file. I currently have no such META files downloading, but there are still a lot of .meta files in the download directory.
  13. Not thinking of VLAN tagging. I only have a vague idea of what it is. All the devices are close enough to connect to a single switch. I do have a bunch of switches in various places all linked in a bit of a mess, but for now I just want to understand if there are any real benefits to having larger switches rather than smaller ones, where possible.
  14. Not directly related to unRAID, but I thought this would be a good place to ask. I have one 5-port network switch daisy-chained, if that's the right phrase, to an 8-port switch, and am running out of ports. (The current setup wastes two ports linking the two switches.) Would it be better to replace the 5-port switch with an 8-port one, or just go for a 16-port switch to replace the current two? Where I am, a 16-port switch is about 25% more than 2 x 8-port switches. Is there any significant speed or other benefit in having one big switch instead of one daisy-chained to another? If it's relevant, the current setup is (in part) Router -> 8-port switch (on another floor) -> 8-port switch -> 5-port switch
  15. @jang430, I also have a few Linux Mint VMs on my unRAID server, also with no GPU and no sound hardware. I want to access the VMs from my Windows 10 PC, but couldn't get any sound via any remote desktop method; thanks to @itimpi I am now trying the ac97 emulation. VNC is, as you say, fast, but I can't get any audio. I managed to use Windows Remote Desktop (after a bit of fiddling around in Mint), but YouTube playback was a little choppy, although audio was fine (for me). Best was NoMachine: smooth video and audio.
  16. You can check Corsair PSU cable compatibility on the Corsair website.
  17. I'd be happy with an incremental parity check: for example, if I have an 8TB parity disk, I would be able to check 2TB every night for 4 nights (not necessarily consecutive nights). Alternatively, run parity check for x hours at a time. In both cases, resume where the last check ended. This seems like the simplest parity check feature to add, since no monitoring of the system is required for throttling. It's like an abbreviated version of the current parity check.
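The feature request above boils down to: check a fixed budget of bytes per run and persist the offset so the next run resumes where the last one stopped. A minimal sketch of that bookkeeping, where `STATE_FILE`, the sizes, and the `check_range()` callback are all hypothetical (unRAID exposes no such API):

```python
import json
import os

STATE_FILE = "parity_check_state.json"
DISK_SIZE = 8 * 10**12       # 8TB parity disk, as in the example above
CHUNK_PER_RUN = 2 * 10**12   # 2TB checked per night

def load_offset():
    # Resume point from the previous run, or 0 for a fresh pass.
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)["offset"]
    return 0

def run_incremental_check(check_range):
    start = load_offset()
    end = min(start + CHUNK_PER_RUN, DISK_SIZE)
    check_range(start, end)  # caller-supplied: verify parity over [start, end)
    # Wrap back to zero once a full pass over the disk is complete.
    next_offset = 0 if end == DISK_SIZE else end
    with open(STATE_FILE, "w") as f:
        json.dump({"offset": next_offset}, f)
    return start, end
```

Four nightly runs then cover the whole 8TB disk, after which the offset wraps and a new pass begins; the "x hours at a time" variant would just replace the byte budget with a time budget.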
  18. I managed to get my VMs back relatively easily. It seems that the error message "operation failed: unable to find any master var store for loader: /usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd" refers to a missing nvram file in /etc/libvirt/qemu/nvram. The relevant file (which I had fortunately backed up) has a string of characters at the front followed by '_VARS-pure-efi.fd'. This string of characters corresponds to the UUID in the VM's XML file. (A brief aside: one thing I got stuck on for a while: I was using Midnight Commander (mc) from a terminal on my Windows PC and wanted to use it to copy the .fd file to /etc/libvirt/qemu/nvram. In mc the nvram files had an asterisk in front of the name. I've learned to be very scared of unusual things in Linux (so basically most of Linux for me as a Windows user). A lot of Googling only described filenames with asterisks after their names, and mc discussions only mentioned colours of filenames. Worryingly, the .fd files in /etc/libvirt/qemu/nvram didn't have this asterisk. I took the bold step of just copying the file, and everything seemed OK. After a bit more guesswork it turns out that the asterisk represents an executable file, which is a bit weird for what I thought an nvram file would be. My guess is that backing up the nvram files somehow added the executable flags.)
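The asterisk in the aside above is just the "classify" indicator that mc (and `ls -F`) appends to any file whose executable bit is set; it says nothing about the file's contents. A small demonstration — the nvram-style filename below is made up:

```python
import os
import stat
import subprocess
import tempfile

d = tempfile.mkdtemp()
path = os.path.join(d, "demo_VARS-pure-efi.fd")  # hypothetical nvram-style name
open(path, "w").close()

os.chmod(path, 0o755)  # set the executable bit (as a backup tool might)
print(subprocess.run(["ls", "-F", path], capture_output=True, text=True).stdout.strip())
# the listing ends with '*' while the bit is set

os.chmod(path, 0o644)  # clear the bit; the '*' disappears from the listing
print(subprocess.run(["ls", "-F", path], capture_output=True, text=True).stdout.strip())
print(bool(os.stat(path).st_mode & stat.S_IXUSR))
```

A stray executable bit like this is harmless for a data file; `chmod -x` on the copied .fd files would remove it.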
  19. I'm trying to understand what passthrough means on a VM, in particular for a graphics card (GPU). I get that the GPU can be assigned exclusively to a VM (while the VM is running). And my reading of various threads tells me that to get optimal use out of the GPU I should connect my monitor directly to the GPU with a cable. This would be my natural understanding of "passthrough". However, somewhere I got the impression that it's also possible to get at least some of the benefit of the GPU using remote access over Ethernet. So, for example, if I had a GPU passed through to a VM, and accessed the VM using Remote Desktop or Splashtop, would I see any visual improvement? By improvement, I mean snappier desktop performance (not expecting bare metal performance); gaming capability would be a bonus, but not essential. Also, would it be possible to get the benefit of the GPU audio out over remote access? Or do I have to be physically connected? (My motherboard doesn't have onboard audio.)
  20. Thanks for the suggestion, but I think it's unlikely to be a full cache drive problem. I have around 200GB free on the cache drives and very rarely go below 100GB free. Fortunately I have backups of the VM image files and the XMLs and when I have a bit of time I'll work on restoring the VMs. I didn't have a libvirt.img backup, but I'll make sure I back that up in future. Still a bit confused about this file, as it seems to be recreated when booting up.
  21. (I posted earlier in the Docker Engine forum, but I don't think that's the right place, as this is more about VMs.) I'm on 6.5.3 and have 4 VMs, which were previously all running OK. I was fiddling around trying to install virt-manager on my LinuxMint VM and when I rebooted the unRAID server I found that I had a blank VM tab (no Add VM buttons or anything else). Based on looking at many forum threads, I checked to see that all the paths in VM Settings were correct (including Libvirt storage location specifying the file, not just the folder). I don't think I had changed anything, so it didn't seem like an incorrect path could be the cause of the blank tab. But then I noticed that the Libvirt storage location was /mnt/cache/libvrt.img, which it has been all along (I checked previous diagnostics files). On changing this to /mnt/user/libvirt.img (note "user" instead of "cache" and spelling of libvirt), I managed to get the VM tab to show the Add VM and other buttons, although no VMs were listed. As an experiment, I tried to create a new Windows 10 VM, just using the default template. What surprised me was that my existing Win10 VM came up, even though I had made some minor changes to the XML file for that existing VM. Fortunately I had a backup of that customised XML, so I shut down the "new" VM and then overwrote the XML entries with the backed up ones. On restarting I got an error message: "operation failed: unable to find any master var store for loader: /usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd". Unfortunately I can't recall exactly what I did after that, but I definitely didn't edit the XML file since I have no idea what most of the entries do. In the end I just restarted the VM (perhaps after rebooting the server) and now my Win 10 VM is working just as before. Now I'm going to try restoring the other VMs. But if there's a better way of restoring the VMs, I'd appreciate some pointers. 
(Also any suggestions as to why the VMs went AWOL in the first place - so that I can avoid this happening again.) tower-diagnostics-20190110-2122.zip
  22. I have the same problem of not having the file /etc/libvirt/libvirtd.conf. A brief history: I have four VMs, which were working fine under 6.5.3. I tried to install virt-manager on a Linux VM following SpaceInvader One's video, which also requires editing libvirtd.conf. While following the video I was able to edit the file, which was in the expected /etc/libvirt folder. (I also enabled nc-1.10 from NerdPack, as set out in the video.) But after rebooting unRAID, my VM tab was blank. Sometimes it takes a few minutes for the VMs to show up, but after a long wait there were still no VMs listed. (I've checked the paths in VM Settings, and I'm sure they're correct - and everything was working previously.) I thought I had messed up the editing of libvirtd.conf, but when I looked for it, I found that the /etc/libvirt folder was empty. I saw that there was also a /etc/libvirt- folder (note the minus sign) and inside that folder there is a libvirtd.conf file (without any change to the "listen_addr" line). I have no idea why the file is in this other folder and whether this is expected behaviour. So @disruptorx, how did you get it working? tower-diagnostics-20190110-1722.zip
  23. I don't recall doing anything complicated to get to my Win10 VM when I installed it. Go to the unRAID GUI VM page, start the Win10 VM and then click on the VM icon and choose VNC Remote from the menu. That should get you into the Win10 setup process. (If things aren't working, take a look at SpaceInvaderOne's video on installing a Windows VM on unRAID.)
  24. Just saw this. Does this mean you're running a MacOS VM on your Threadripper unRAID box? If so, what's the performance like? And what software are you using on the Surface 3 for remote access to the MacOS VM?