Everything posted by sonofdbn

  1. I have the same problem of not having the file /etc/libvirt/libvirtd.conf. A brief history: I have four VMs, which were working fine under 6.5.3. I tried to install virt-manager on a Linux VM following SpaceInvader One's video, which also requires editing libvirtd.conf. While following the video I was able to edit the file, which was in the expected /etc/libvirt folder. (I also enabled nc-1.10 from NerdPack, as set out in the video.) But after rebooting unRAID, my VM tab was blank. Sometimes it takes a few minutes for the VMs to show up, but after a long wait there were still no VMs listed. (I've checked the paths in VMSettings, and I'm sure they're correct - and everything was working previously.) I thought I had messed up the editing of libvirtd.conf, but when I looked for it, I found that the /etc/libvirt folder was empty. I saw that there was also a /etc/libvirt- folder (note the minus sign) and inside that folder there is a libvirtd.conf file (without any change to the "listen_addr" line). I have no idea why the file is in this other folder and whether this is expected behaviour. So @disruptorx, how did you get it working?
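(For anyone hitting the same empty-folder symptom: on unRAID the libvirt configuration lives inside a loopback image that gets mounted at /etc/libvirt, so an empty folder can simply mean the mount didn't happen. This is a quick check, not a fix; the paths are assumptions based on a default unRAID setup.)

```
# Check whether the libvirt loopback image is actually mounted at /etc/libvirt;
# if nothing shows up, the empty folder is just an unmounted mount point
mount | grep -i libvirt

# Compare the two folders mentioned above
ls -la /etc/libvirt /etc/libvirt- 2>/dev/null
```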
  2. I don't recall doing anything complicated to get to my Win10 VM when I installed it. Go to the unRAID GUI VM page, start the Win10 VM and then click on the VM icon and choose VNC Remote from the menu. That should get you into the Win10 setup process. (If things aren't working, take a look at SpaceInvaderOne's video on installing a Windows VM on unRAID.)
  3. Just saw this. Does this mean you're running a MacOS VM on your Threadripper unRAID box? If so, what's the performance like? And what software are you using on the Surface 3 for remote access to the MacOS VM?
  4. Ah, OK! I see it's been happily logging stuff for over a year. Thanks.
  5. I noticed that in my docker config path there is a file "access.log" which seems to be updated continuously. The docker is working fine, but that access.log file is now over 8 GB in size. Is this normal? I have about 100 torrents, but typically fewer than 20 are active, if that helps.
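(If the log isn't needed, it can be emptied in place. Truncating, rather than deleting, is the safer option while the container is running, because the process keeps writing to the same open file handle; deleting the file wouldn't free the space until the container restarts. A minimal sketch using a stand-in file — the real one would be somewhere under the container's /config path:)

```shell
# Stand-in for the container's access.log (the real path would be something
# like /mnt/user/appdata/<container>/access.log)
LOG=/tmp/access.log
printf 'GET /announce\nGET /announce\n' > "$LOG"

# Empty the file in place; the running process keeps its open handle
# and simply continues appending to a now-zero-length file
truncate -s 0 "$LOG"

ls -l "$LOG"    # size is now 0 bytes
```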
  6. Sorry, perhaps not really a docker question, but I saw some odd behaviour when I used this docker to copy files from a Samba share to the array. I copied a folder with 10 files in it and on the array, four of the files had the "Modified" timestamp updated to the time of copying, while the others retained their original timestamp (all from the year 2002). The files are all photos or videos, and the biggest file is 1.1 MB. I would have thought that either every file or no file gets the timestamp modified. I'd prefer to keep the original timestamps. Is there any way of ensuring this?
  7. To answer my own question from a few posts ago, I needed to use "abc" instead of "www-data". The occ command needs to be run from the /config/www/nextcloud directory, and I needed to add the username as a parameter for files:scan. (I initially got a message saying "The current PHP memory limit is below the recommended value of 512MB". Some quick googling and appending "memory_limit = 512M" to php-local.ini fixed this.)
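(Pulling the answer above together, the working sequence on this container looks roughly like the following; the container name and `myuser` are placeholders for whatever your setup uses:)

```
# Open a shell inside the Nextcloud container (container name may differ)
docker exec -it nextcloud /bin/bash

# occ must be run from the Nextcloud web root, as the "abc" user
# (this image's equivalent of www-data)
cd /config/www/nextcloud
sudo -u abc php occ files:scan myuser    # myuser is a placeholder username
sudo -u abc php occ files:scan --all     # or scan every user

# If occ complains about the PHP memory limit, append this line to
# /config/php/php-local.ini and restart the container:
#   memory_limit = 512M
```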
  8. I've installed this Nextcloud docker with the intention of replacing OneDrive. (Nextcloud doesn't have the same functionality, but it seems to be close enough for my purposes.) However, copying data from my Windows PC is taking forever because the PC first has to download many of the files and then they get copied to my unRAID server, and I'm doing this through the NC Windows client, which is also a little slow. I do have the OneDrive files backed up on a Synology NAS and I thought it would be easy enough to copy those files to the NC folders on the unRAID server and then use the Windows NC client to sync for the latest changes. However, it's now clear to me that NC doesn't just "see" those files and they have to somehow be made visible (which they would be if they were copied via the client). Digging around, there seems to be an NC command "occ" that would allow NC to scan the relevant unRAID folder and make the copied file visible to NC. Unfortunately I can't invoke the occ command. I went into the terminal for the NC docker, and tried to run "sudo -u www-data php occ files:scan", where www-data is the "HTTP user and group", according to the NC documentation, for certain Linux distros. My problem is I don't know what I should use instead of www-data for the NC docker (assuming I'm even on the right track). If I use www-data I get an "unknown user" error message. Can anyone help? (Plan B is to copy the files from the Synology NAS to the NC folders on the unRAID server, and then reinstall the MariaDB docker. But this is a one-time solution, if it even works.)
  9. I stumbled across the new forum site on my Android tablet using Chrome, and had to log in, but of course had forgotten my password. So I tried to recover the password and found that I had also forgotten the email I signed up with. The frustrating problem is that there's a screen that allows you to enter a new email address, but a) the text is unreadable because it seems to be white on white, and b) if I squint very hard, it seems that the letters entered are randomly added to the text box. Retrying this morning, the text is a very light grey on light pink, which is not much better, but squinting shows that the text entry now seems to be OK. (I'm on the forum now because fortunately Chrome on my PC remembered the password.)
  10. Thanks so much for the help. I ran an extended SMART test and the disk passed. So I shut down, changed the cable and restarted. Now I get a nice notification telling me that "array turned good. Array has 0 disks with read errors."
  11. While I was away there was a power glitch and fortunately the UPS shut down the array gracefully. But on rebooting, I got CRC errors and disk read errors on one of my data disks. From reading around, it seems that CRC errors are possibly not too serious. However, disk read errors might be an issue. My diagnostics: Should I replace the drive? If it's recommended that I replace the disk, which is 4TB, can I just swap in an 8TB drive and rebuild?
  12. Same problem with my new MX500. Returned to normal after 30 minutes.
  13. Well, I did some digging around and came up with an inelegant way of putting ffmpeg into the container. My Linux skills are very limited, so I googled around and also read the MinimServer documentation about the stream converter (in my case ffmpeg) here. The first thing was to get a command prompt in the container. I opened a terminal window on my Windows PC, logged on to the unRAID server, and got into the docker container using: docker exec -it docker-minimserver /bin/bash. I hunted a bit and found the minimserver directory under /opt. Under that minimserver directory is another opt directory, and I think I had to create the bin sub-directory under that. That gave me the opt/bin directory I had to put ffmpeg in (/opt/minimserver/opt/bin). Next, I had to get ffmpeg, and found a static build here. I used the x86_64 build; initially I wasn't sure how to get it into the container, but since I had already mapped the /media folder to my music folder, I just needed to get ffmpeg into the music folder. So I downloaded the package ffmpeg-git-64bit-static.tar.xz, unzipped it (using 7-Zip) and copied ffmpeg into the music folder. Then in the terminal window I copied ffmpeg from the /media folder into /opt/minimserver/opt/bin. (I actually took a longer route, but this is the gist of what worked.) After that I opened the properties window from Minimwatch, went to System, entered ffmpeg for stream.converter and clicked Apply, and MinimServer found it (it appears as opt/bin/ffmpeg in my case); now I'm happily transcoding flac to wav24. I'm not sure if MinimServer needs to be restarted for it to see ffmpeg; when I first tried, it didn't appear to find ffmpeg, but after a few seconds the error message saying MinimServer couldn't find ffmpeg disappeared. No doubt there is a better way of doing this (and I'm not sure if this will survive a restart), but I'm happy it's working. Now I can shut down the Linux Mint VM which I used only to run MinimServer.
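(Condensed, the steps above amount to something like this. The container name and paths are from my setup, and the directory lives inside the container's filesystem, so it will likely be lost if the container is rebuilt:)

```
# Get a shell inside the MinimServer container
docker exec -it docker-minimserver /bin/bash

# Inside the container: create the directory MinimServer searches
# for stream converters
mkdir -p /opt/minimserver/opt/bin

# /media is the mapped music share, where the unpacked static ffmpeg
# binary was copied to from the Windows side
cp /media/ffmpeg /opt/minimserver/opt/bin/
chmod +x /opt/minimserver/opt/bin/ffmpeg
```

Then set stream.converter to ffmpeg in Minimwatch and click Apply, as described above.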
  14. Wow, I haven't been keeping track of this, and now things have moved on so much. Thanks so much, EdgarWallace. I've successfully installed the docker following your instructions. A wonderful Christmas present! Er, of course now I want more... I've installed Minimstreamer as well, as I normally use this to convert flac to wav24. But to do this I also need to install ffmpeg, presumably somewhere where Minimstreamer can find it. Any idea how this can be done?
  15. OK, I think I get it now. I took a look at various N-gamers 1-CPU videos and I need to have the physical connections for the "real" VM, and not the remote desktop one, if that makes sense.
  16. Thanks for the very useful videos. I'm not sure if this was addressed in the videos, but for the gaming VM, what are the physical connections? The second video still seems to show the VM running under Splashtop. Would the gaming be done via Splashtop? This seems unlikely to produce near bare metal performance. The way I'm thinking about it, for Splashtop or, say, RDP, I just need a PC that is on the same network as the unRAID server (or can "see" it over the Internet). But the monitor of the PC I'm using would be connected to whatever graphics device I use on the PC, and is therefore totally independent of any graphics card I have on the server. So if I want to use the VM for gaming, would I need a separate monitor, mouse and keyboard (both connected directly to the server via passed through devices)? I'm sure I'm missing something quite fundamental here, but I hope someone can explain what needs to be done.
  17. With WD Data Lifeguard, you can also run two (and presumably more) instances. I've done it successfully with two 8TB HGST Deskstars, and doing it now with another two. I don't know if there is a performance penalty, but I suspect it still takes much less time than running the tests consecutively.
  18. I run 6.3.5 with a dual parity setup and 6 data drives. The two parity drives are 4TB and 8TB (so naturally the largest data drive is 4TB), and I want to replace the 4TB parity drive with an 8TB one. Is there anything special I need to do? Here's what I plan to do: 1. Do a parity check. 2. Stress test the new drive (since this will be a parity drive, there's no need to preclear). 3. Stop the array and power down. 4. Replace the old (smaller) parity drive with the new larger one. 5. Power up. 6. Start the array -> rebuilding of the new parity drive starts. Some questions: a. Can the array be used while rebuilding parity? (By "used" I mean running VMs and dockers and general reading/writing.) b. Is it necessary/recommended to do a parity check after the rebuilding? c. I run the server off a UPS. What would happen if there is a power failure and the UPS triggers a shutdown during parity rebuilding?
  19. I've used the WD Data Lifeguard tool with both HGST and Seagate drives without any problems. (Since HGST is, I believe, owned by WD, perhaps that's not surprising.)
  20. Thanks, that makes sense. Although I don't think some of the possibilities apply to my particular case (the battery is supposedly about one year old), it's clear that LOTS of things might go wrong. I think shutting down quickly is a good approach. That's less inconvenient than running through a parity check (or being tempted not to).
  21. I'm running my unRAID box from an APC UPS (Back-UPS CS650), connected via USB, and the server is set to shut down on UPS power once the battery level reaches 25% or there are 10 minutes of runtime left. When the power trips, usually from lightning, it usually works as I would expect: everything keeps running off the UPS until power is restored, or else after a while it shuts down cleanly. But occasionally, like today, it doesn't shut down properly, and the behaviour seems odd. Firstly, there was some lightning and some of the lights went off. I reset some fuses (no more than 5 minutes) and then went to my PC (which runs off another UPS); it was in sleep mode and resumed with no problem, and I went to the unRAID GUI. The Main page showed that the array was stopped, there was a message saying that there had been an unclean shutdown, and the start button suggested that I should run a parity check, which I'm now doing... only another 20 hours to go.... Why would there be an unclean shutdown when I'm sure there was more than enough battery capacity for the few minutes when there was no power? Can dockers or VMs prevent a clean shutdown? And if the UPS battery did run out before a clean shutdown, shouldn't the unRAID box just be powered off and not reboot itself?
  22. Two things that DSM 6.1.2 (and earlier) has that I can't easily do on unRAID: encrypted shared folders and running MinimServer. (And there's something to be said for the hardware: I have a DS414, and installing drives physically is so easy with the drive trays. I haven't found anything as simple for a home-built machine.)
  23. MinimServer is a UPnP music server, and I'm trying to install a MinimServer docker. I haven't done a "manual" docker install before, but after a bit of reading I managed to download it via the CA tab and fiddled around mapping paths and ports, and in particular setting Network Type to "Bridge". Somewhat to my surprise (and a testimony to the amount of information available here), I managed to get MinimServer running. It required going to MinimServer's http page and accepting a licence agreement, but after that it seemed to be OK, as its web UI said it was running (and the log showed this as well). The problem was that I couldn't access it from any PC on my network. MinimServer is controlled via MinimWatch, which can be installed on a Windows PC. I installed it, and it had no problem finding a MinimServer test server that I was running on a Synology NAS. But I couldn't get it to find the docker MinimServer. (I tried shutting down the Syno server and reinstalling MinimWatch, but that didn't help.) I'm wondering if the problem is with UPnP (which I know very little about) and whether it somehow needs to be enabled on unRAID. I read a thread here on the Logitech Media Server docker which mentioned a similar-sounding UPnP problem. It was suggested there that Host mode should be used for Network Type, so I tried that, but it seems to remove (or not require) any port mapping, so I think I misunderstood what is required. Is something else required in order to enable UPnP? (And is that the likely problem?)
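(For what it's worth: UPnP discovery relies on SSDP multicast (UDP to 239.255.255.250:1900), and multicast generally doesn't cross Docker's bridge NAT, which would explain a server that runs fine but is invisible to control points on the LAN. Host networking is the usual workaround. A rough command-line equivalent of the unRAID template setting, with image name and volume path as placeholders:)

```
# Run the container on the host's network stack so SSDP multicast
# announcements reach the LAN; no -p port mappings are needed (or used)
# in this mode. Image name and volume path below are placeholders.
docker run -d --name minimserver \
  --net=host \
  -v /mnt/user/music:/media \
  some/minimserver-image
```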
  24. Thanks so much! It was indeed the File Integrity plug-in! I tried various ways to suspend the plug-in and also tried killing the various sha256sum processes, but new ones kept on popping up. In the end I removed the plug-in and bingo! Speed immediately up and now at about 40MB/s. Important note: I was probably not using the Dynamix File Integrity plug-in correctly, as I never spent any time setting it up carefully, so I certainly wouldn't generalise and claim in any way that the plug-in doesn't work properly.
  25. That makes sense. Is there a way of seeing what's causing all the activity? Using the top command and also looking at the Open Files plugin, it seems that most of the ongoing processes are sha256sum - and yes, there are files on at least disks 2 and 5. That might be related to the Dynamix File Integrity plug-in, but it's set not to start anything while parity-checking is going on. Perhaps these are slowly completing (but 9 hours!). Is it safe to kill a process if, under Open Files, it shows 0 files for that process that "may prevent shutdown"?
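(As a general pattern, stray worker processes can be located and terminated from the unRAID console. A self-contained sketch, using a background sleep as a stand-in for the sha256sum jobs; in the real case you would substitute sha256sum for the pattern:)

```shell
# Stand-in long-running process (in the real case this would be sha256sum)
sleep 307 &
PID=$!

# Locate it by command line, as one would with: pgrep -af sha256sum
pgrep -f 'sleep 307'

# A plain TERM is the polite way to stop it; use SIGKILL only as a last resort
kill "$PID"
wait "$PID" 2>/dev/null || true   # reap it; a non-zero status is expected here
pgrep -f 'sleep 307' || echo "process gone"
```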