Seriously_Clueless

Everything posted by Seriously_Clueless

  1. I have been running an Unraid system for a long time and love it. Thanks for all the hard work that improves it over time. Now I would like to run ROEBOT (https://www.roebot.net/) in a Windows 10+ VM. ROEBOT is a game bot for a game called Rise of Castles (formerly Rise of Empires). The script requires the Windows environment to also run an Android emulator, and these are the supported choices: LD Player 9.0.13+, LD Player 5, or MEmu. So basically I am trying to run an Android emulator inside the Windows VM, and that is where I run into issues: none of the Android emulators will install, or they fail to start because they detect they are running in a VM. Does anyone here have an idea how I can solve this dilemma? The server runs 24/7, so being able to run this would be super appreciated. If it is not doable in a VM, what alternatives does one have without setting up another PC, which somewhat defeats the purpose of running the server in the first place? Thank you very much in advance for any feedback.
  2. While we are patiently waiting for the Unraid team to advise on how they would like to tackle this going forward (i.e. not using BTRFS in the docker.img file), I have done the following to reduce my 96G to around 40G. I might be able to get it down further, but I wanted to be cautious:

     1. sudo btrfs subvolume delete /var/lib/docker/btrfs/subvolumes/<name of subvolume>
        This deletes the subvolume and frees up its space. I did it by name since I wanted to keep some of the subvolumes; if you want to delete all of them, use * instead of a name.
     2. btrfs scrub status -d /var/lib/docker/
        Shows you the state of the btrfs volume.
     3. btrfs scrub start -d /var/lib/docker/
        Starts an integrity check of the volume (it verifies checksums against the data; it is not a defrag).

     If you run btrfs --help you can see a few more commands that help with listing the subvolumes instead of looking at the directory itself. The docker service has to be started for this, otherwise /var/lib/docker will not contain anything; it is the mount point for the docker.img file. And now my folder looks like this:

     root@NameOfServer:/var/lib/docker# du -h -d 1 .
     440K ./containerd
     77M  ./containers
     0    ./plugins
     41G  ./btrfs
     39M  ./image
     56K  ./volumes
     0    ./trust
     88K  ./network
     0    ./swarm
     16K  ./builder
     88K  ./buildkit
     1.4M ./unraid
     0    ./tmp
     0    ./runtimes
     41G  .

     So roughly 50G were recovered without any issue.
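Step 1 above can be scripted cautiously. This is my own sketch (the DRY_RUN switch is my addition, not part of btrfs): it only prints the delete commands until you flip the flag, so you can review before anything is removed.

```shell
#!/bin/sh
# My own cautious wrapper around the subvolume delete step: dry-run by
# default, so it only prints what it would remove. Needs the docker
# service running so /var/lib/docker (the docker.img mount point) is
# populated.
DRY_RUN="${DRY_RUN:-1}"

delete_subvolume() {
    target="/var/lib/docker/btrfs/subvolumes/$1"
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: btrfs subvolume delete $target"
    else
        btrfs subvolume delete "$target"
    fi
}

# Pass each subvolume name you have decided to drop, e.g.:
delete_subvolume "f425bd8572d449ab63414c5f6b8adf93a5a43830149beff9ae6fd78f4c153964"
```

Run it once, check the printed commands, then re-run with DRY_RUN=0.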
  3. The Garuda Linux discussion does describe a fix, but again, one would expect that to be handled by default in Unraid. So if any of the devs are listening here, it would be great to get your feedback on this. When going through the docker documentation, they clearly point out the drawbacks of btrfs with docker. Maybe the decision to use btrfs was a bit rushed? If not, how are we supposed to safely fix this without deleting the docker.img file?
  4. I looked at my docker.img file to better understand the underlying issue and discovered that it was filling up in my btrfs folder:

     root@NameOfmyServer:/var/lib/docker# du -h -d 1 .
     472K ./containerd
     71M  ./containers
     0    ./plugins
     93G  ./btrfs
     45M  ./image
     56K  ./volumes
     0    ./trust
     88K  ./network
     0    ./swarm
     16K  ./builder
     88K  ./buildkit
     1.8M ./unraid
     0    ./tmp
     0    ./runtimes
     93G  .

     As you can see, the ./btrfs folder holds all the data. Here is a glimpse of the content of that btrfs folder:

     drwxr-xr-x 1 root root 114 Apr 19 16:59 f425bd8572d449ab63414c5f6b8adf93a5a43830149beff9ae6fd78f4c153964/
     drwxr-xr-x 1 root root 164 Jun  2 16:24 f56871468a4410ae08e7b34587135c9f53e7bc5f7e6d80957bc2c1bb75dc5835/
     drwxr-xr-x 1 root root 114 Jul 22  2020 f59dccf9b84afc4e4437a3ba7f448acbe9d97dec1bbc8806740473db9be982e7/
     drwxr-xr-x 1 root root 144 May 22 13:28 f955be376b3347302b53cc4baf9d99384f5249d516713408e990a1f9193e6a91/
     drwxr-xr-x 1 root root 210 May 17 08:58 f9ff6f9552b835c1ee5f95a172a72427d2a94d97221300162c601add05741efc/
     drwxr-xr-x 1 root root 188 May 17 08:58 fa15712ed8447e23f26f0377d482ef12614f4dbd938add9dc692f066af305892/
     drwxr-xr-x 1 root root 114 Apr 19 17:06 fb42f9a61aa57420a38522492a160fe8ebfbacd3635604d06416f2af3d261394/
     drwxr-xr-x 1 root root 226 May 17 09:09 fd6c83a4ab776e1d2d1de1ed258a31d0f14e1cc44cfc66794e13380ec84e7e7d/
     drwxr-xr-x 1 root root 392 Jun  2 17:26 fe1e7c51852590ee81f1ba4cd60458c0e919eec880dc657b2125055a0a00e305/
     drwxr-xr-x 1 root root 252 Jun  2 17:26 fe1e7c51852590ee81f1ba4cd60458c0e919eec880dc657b2125055a0a00e305-init/
     drwxr-xr-x 1 root root 230 Jun  2 16:13 ff1f7c71fff4b8030692256f340930e61c2fc8ec67a563889477b910f9ae1ece/

     So I looked around and found this discussion, which describes the buggy nature of docker on BTRFS. Awesome ... https://github.com/moby/moby/issues/27653
     That discussion links to this gist: https://gist.github.com/hopeseekr/cd2058e71d01deca5bae9f4e5a555440
     Here is another forum thread about the issue: https://forum.garudalinux.org/t/btrfs-docker-and-subvolumes/4601/6
     I don't recommend that any of you follow those actions; what I am trying to say is that the Unraid devs should look into this and come back with a solution for how users like us can safely remove these orphan images from our systems without risking data loss. So what would be the proper procedure to get this raised with Unraid? It is clearly a docker bug with btrfs, but Unraid decided to move to btrfs for the cache drive and we followed that direction. Thank you.
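To see which subvolumes actually exist, rather than trusting the directory listing above, the output of `btrfs subvolume list` can be parsed. The helper below is my own sketch and assumes the usual list line format; the idea is to compare the result against the layers docker still references (`docker image ls`, `docker ps -a`) before calling anything an orphan.

```shell
# My own helper: pull the path field out of `btrfs subvolume list` lines.
# Assumes the usual line format:
#   ID 257 gen 10 top level 5 path btrfs/subvolumes/<hash>
subvol_path() {
    echo "$1" | awk '{print $NF}'
}

# On a live system (docker service running), something like:
#   btrfs subvolume list /var/lib/docker | while read -r line; do
#       subvol_path "$line"
#   done
```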
  5. The links that this docker repository uses to download the required binaries no longer exist, so unless the owner of the docker repository updates the links to deemix, this docker is dead.
  6. Quick question: I have a working Unraid server and need to replace the parity disk with a larger HDD. Am I correct in my assumption that the parity disk holds only parity information, and that encryption is therefore not available/needed for it? All my data drives are encrypted.
  7. Why would you need that? Included in this docker is a pack and unpack tool for pretty much anything under the sun, including 7z via p7zip, the Linux port of 7-Zip: https://wiki.archlinux.org/index.php/P7zip Highlight the file you would like to compress/extract, go to the "file" dropdown menu and select pack/unpack. That simple.
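The same pack/unpack that Krusader's menu does is also available from the docker's console via p7zip's command line. A self-contained demo (the temp directory is illustrative; inside the docker you would point at your shares instead), which skips quietly if p7zip is not installed:

```shell
# p7zip CLI equivalents of Krusader's pack/unpack. Skips if 7z is missing.
command -v 7z >/dev/null 2>&1 || { echo "7z not installed"; exit 0; }

demo=$(mktemp -d)
cd "$demo"
echo "hello" > file.txt

7z a backup.7z file.txt      # a = add (pack) files into an archive
7z l backup.7z               # l = list the archive contents
7z x backup.7z -orestored    # x = extract, preserving paths
```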
  8. Correct, this docker is Arch-based and not Alpine-based, sorry for the statement.
  9. This docker uses Arch Linux to provide the Krusader file manager. If you would like to view pictures, I suggest you install ristretto into the docker. It is a lightweight image viewer made for the Xfce desktop.

     1. Enter the Unraid GUI and go to Dashboard. Click on the krusader icon and choose Console.
     2. Type in the terminal window that opens: pacman -S ristretto
     3. Press return, then Y, then return again.
     4. Once the command is done, close the window.
     5. Restart the docker.

     If you would like to view pictures now, simply right click in Krusader and select "open with". You can also make it the default in the Krusader settings for the image types you like. Please keep in mind that when you update the docker you will have to reinstall the app again. I usually install mediainfo and ristretto into the docker so I have some additional functionality. It works fine. Hopefully this answer is helpful.
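Since the packages are gone after every docker update, the reinstall can be kept as a one-liner you re-run from the container's console. A sketch of my own (package names are the ones from this post; `--noconfirm` answers the Y prompt for you):

```shell
# Build the reinstall command so it is one copy-paste after each update.
# Package list is from this post; --noconfirm skips the Y prompt.
install_cmd() {
    echo "pacman -S --noconfirm $*"
}

# Inside the container's console you would run the command this prints:
install_cmd ristretto mediainfo
```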
  10. Same issue here, but the way I managed to get the 20.04 desktop installed was to choose safe mode instead of the normal install. That worked without issues.
  11. Thanks, but would an rsync of a running array/cache drive provide a usable backup? I am concerned that e.g. the dockers might be running and the backup might not be usable afterwards. Do I have to back up each VM and each docker separately and ensure they are offline when doing so? Thank you. If there is a link that explains this in more detail, I am happy to read it if anyone can point me in the right direction.
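For what it is worth, the concern is valid: copying appdata while containers are writing can give an inconsistent backup. A sketch of the stop/copy/restart pattern (my own, dry-run by default; the paths and DRY_RUN switch are assumptions, not an official Unraid procedure):

```shell
#!/bin/sh
# Sketch of a consistent cache backup: quiesce containers, copy, restart.
# Dry-run by default so it only prints what it would do.
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

RUNNING="$(docker ps -q 2>/dev/null)"
run docker stop $RUNNING                                # stop containers first
run rsync -aH --delete /mnt/cache/ /mnt/user/backup/    # copy cache to the array
run docker start $RUNNING                               # bring them back up
```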
  12. Turns out it was a Chromium issue; when I used Firefox, everything worked fine. What a waste of time. Apologies to anyone who also encountered this issue.
  13. I recently had a complete failure of the cache drive on my server and am in the process of reinstalling everything. I want to get my Win10 gaming VM back and am stuck at the very first stage. I click on create a Win10 VM, give it 1 core (not the first one), give it some RAM, and point it to the new virtio drivers as well as a fresh Win10 ISO from the MS Windows website. The vdisk is 300G auto, again nothing special. Creating it works fine without any errors. When I start the VM, VNC refuses to connect. Logs are attached; I am at a loss at this stage and don't understand why the built-in VNC does not want to connect. All my docker images work fine, but VMs don't. I also tested a Linux ISO for fun, but that didn't work either. I restarted the server just to see if there was anything else, but that did not resolve the issue. Any help is much appreciated. Thank you. It seems I am not alone with this issue; here is another user who has the exact same problem with 6.8.3: But no solution. tower-diagnostics-20200612-1736.zip
  14. Unfortunately none of the suggestions yielded any benefit. I had to wipe the cache drive. What a bummer, especially since I tried to follow guidance doing it in the first place. The new RAID 1 cache pool is now created. How can I back up my cache drive pool on a regular basis, so that if this happens again I have a perfect copy of it on my array?
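One way to automate it: a nightly rsync of the pool to a share on the array (the schedule, paths and log file below are assumptions; the User Scripts plugin is the usual Unraid way to schedule something like this).

```shell
# A cron entry for a nightly copy of the cache pool to the array:
#
#   0 3 * * * rsync -aH --delete /mnt/cache/ /mnt/user/backups/cache/ >> /var/log/cache-backup.log 2>&1
#
# Self-contained demo of that rsync invocation on throwaway directories
# (skips quietly if rsync is not installed):
command -v rsync >/dev/null 2>&1 || exit 0
src=$(mktemp -d)
dest=$(mktemp -d)
echo "appdata" > "$src/test.txt"
rsync -aH --delete "$src/" "$dest/"
```

The trailing slash on the source matters: it copies the contents of the directory rather than the directory itself.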
  15. Hi all, here is what led to a corrupt cache drive on my Unraid installation: I have two cache drives, 1x 1TB and 1x 2TB. Since they are mismatched, I ended up buying another 1TB to replace the 2TB. So I googled and followed this post to the letter: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=511923

      It worked out fine, but the pool size was still 1.5TB even after the drives were swapped. I moved all files to the btrfs cache again without any error message. I did wonder a bit, since the cache 1 drive was the only one that had writes to it, not the cache 2 drive, even though they are in a RAID1. Then I stopped the array after it was finished and restarted Unraid. After the restart, the cache drive is showing:

      root@Tower:~# btrfs check --readonly /dev/nvme1n1p1
      Opening filesystem to check...
      bad tree block 559374450688, bytenr mismatch, want=559374450688, have=0
      Couldn't read tree root
      ERROR: cannot open file system
      root@Tower:~# btrfs rescue super-recover /dev/nvme1n1p1
      All supers are valid, no need to recover
      root@Tower:~# btrfs rescue chunk-recover /dev/nvme1n1p1
      Scanning: DONE in dev0
      open with broken chunk error
      Chunk tree recovery failed

      Thank you in advance for any advice. Please let me know if there is any way I could recover the missing files. Hopefully I can recover the cache, since I really would hate losing all my docker and VM data. Thank you.
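Before wiping a pool in this state, two read-only things are worth trying; neither writes to the broken filesystem. The destination path below is an assumption for illustration, the device is the one from this post.

```shell
# Read-only recovery attempts for a btrfs pool that will not mount.
# This helper only prints the commands so they can be reviewed first.
rescue_cmds() {
    echo "mount -o ro,usebackuproot $1 /mnt/rescue"   # try an older tree root
    echo "btrfs restore -v $1 $2"                     # copy files off, read-only
}

# Prints the commands for this pool:
rescue_cmds /dev/nvme1n1p1 /mnt/disk1/cache-rescue
```

If the usebackuproot mount works, copy everything off before doing anything destructive; btrfs restore is the fallback when no mount succeeds at all.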
  16. Without any details about your network setup, config and so on, it is impossible to help. As you can see in my post, you should provide similar details. Also make sure you include the router settings, especially if you have activated the router's security features. That should help pin down the issue you are facing.
  17. Thank you for the reply. Yes, indeed it is set. I am wondering if it has to do with the routing table from Unraid:

      IPv4 default           10.0.0.1 via br0   210
      IPv4 10.0.0.0/24       br0                210
      IPv4 172.17.0.0/16     docker0            1
      IPv4 192.168.122.0/24  virbr0             1

      Do I need to add or delete anything here? The 192.168 network is from my old setup when I initially installed Unraid; I have since moved it into the house network, which is 10.0.0.0/24. I also read someone suggesting to add a route "IPv4 10.0.0.0/24 eth0 1" to get it to work, but since I don't understand the details and it was a very old forum topic, I am wondering if I should do that. I don't want to render Unraid useless, as you can imagine.
  18. So I have really looked at various references on this forum as well as various Plex docker support forums, but I can't figure out how to get remote access working.

      The Netgear Orbi router is set to port forward 32400 to the internal IP 10.0.0.29. The router also always assigns 10.0.0.29 to my Unraid server. I checked, and the settings in Network are:

      eth0: Enable bridging: yes, IPv4 10.0.0.29

      The Plex docker is set to:
      Network Mode: Host
      Port 32400 enabled with TCP

      The Plex docker states in the log files:
      Dec 02, 2019 21:18:21.933 [xxx] DEBUG - Detected primary interface: 10.0.0.29

      Furthermore, the port is manually set to 32400 in the Plex GUI under Remote Access. I would greatly appreciate any advice on how to fix remote access. Thanks in advance.
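The first thing worth verifying in a setup like this is whether anything is actually listening on 32400 on the LAN before blaming the router. A tiny probe using bash's /dev/tcp trick (the IP is the server's from this post):

```shell
# Probe a host:port without installing anything (bash's /dev/tcp trick;
# timeout keeps it from hanging on silent drops).
port_open() {
    timeout 2 bash -c ">/dev/tcp/$1/$2" 2>/dev/null
}

if port_open 10.0.0.29 32400; then
    echo "Plex port reachable on the LAN, look at the port forward next"
else
    echo "nothing listening, check the docker and its Host network mode first"
fi
```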
  19. lol, thank you, what a simple solution. I should have thought of that before; it is always good practice not to let root do all of that. Thank you, I added a user and everything works.
  20. I ran the "New Permissions" tool (media share only) but still cannot write to the SMB share as the root user. Should I try running "Docker Safe New Perms" as well?
  21. Thank you, I will check and report back. I am sure it is permissions, which would be a bummer since I used the rsync command from another comment here on the board:

      rsync -aHAXvhW --no-compress --progress --info=progress2 --stats /source/ /mnt/user/media/ & disown

      The only thing I added was disown, since it lets me disconnect the CLI after the transfer is initiated.
  22. I just moved a file on the array in a terminal on the machine itself and that worked, so I guess it is an SMB permission issue that prevents me from moving files.
  23. Not sure what happened, but the array is suddenly read-only. Last night I did a move from the cache drive to the array, and this morning it is read-only. I stopped the array and started it again, but I still don't understand the issue.

      One further comment: I used the "Unassigned Devices" plugin to mount a disk, used rsync to copy the data to the array's media share, and had the cache enabled. I ended up with all the directories, but only some of them had content after the mover script moved the data from the cache drive to the array. This is pretty disappointing, so I just disabled the cache drive for this share. Nevertheless I don't understand why it behaved that way.

      Any help getting the array back to read and write is massively appreciated. tower-diagnostics-20190817-0244.zip