Kosmos

Everything posted by Kosmos

  1. Hey everyone, how can I set the directory of my persistent container data to the appdata folder on the array? Right now, the compose file is created on the boot USB device, and all the paths in the compose file are relative to it. However, I'd like to store the data in the same location as the "normal" Docker implementation does. I found working_dir and context, but got errors when trying them. Btw., can I circumvent the error "Configuration not found. Was this container created using this plugin?" when updating the container from the GUI? Best regards
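     A minimal compose sketch of what the post is asking for: instead of paths relative to the compose file on the boot USB, bind-mount absolute paths under the array's appdata share (the service name, image, and container paths below are placeholders, not from the original setup):

     ```yaml
     # Hypothetical sketch: absolute bind mounts into /mnt/user/appdata,
     # matching where Unraid's "normal" Docker implementation keeps data.
     services:
       myapp:                    # placeholder service name
         image: nginx:alpine     # placeholder image
         volumes:
           # persistent data lives on the array share, not next to the compose file
           - /mnt/user/appdata/myapp/config:/etc/nginx/conf.d
           - /mnt/user/appdata/myapp/data:/usr/share/nginx/html
     ```

     With absolute paths like these, the compose file itself can stay on the boot USB while the container data lands in appdata.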
  2. Probably, yes; I will ask SanDisk about this. Thanks again for your continued help! See you on the next one 😛 P.S.: you may flag this topic as solved; I can't do it myself, because I missed creating a new topic and took over from johnsanc (😇)
  3. In my opinion, it could be both. It appears the partition information is lost at some point, so Unraid does not detect that there is a cache partition on the SSD. As a consequence, nothing can be decrypted. However, the problem appears without encryption as well. I tried changing the SATA controller and the cable, but neither helped. Anyway, I wonder why this particular disk is sometimes recognized correctly after reboots and sometimes not. So either Unraid is not reading the disk correctly, or the SSD is resetting/deleting its partition (header) sometimes, for unknown reasons... I attached a screenshot of Unassigned Devices after the problem occurred (encryption lock symbol and partition gone) and a SMART report of this SSD as well. server-smart-20210915-1213.zip
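     To separate the two suspects (Unraid not reading the disk vs. the SSD losing its partition header), something like the following could be run from a console right after a "bad" boot; the device name /dev/sdX is an assumption and must be substituted:

     ```shell
     # Diagnostic sketch -- replace sdX with the actual SSD device.
     lsblk /dev/sdX                  # does the kernel still see a partition (sdX1)?
     blkid /dev/sdX1                 # should report TYPE="crypto_LUKS" on an encrypted pool member
     cryptsetup luksDump /dev/sdX1   # dumps the LUKS header if it is still intact
     btrfs filesystem show           # which btrfs devices/filesystems are visible at all
     ```

     If lsblk still shows the partition but luksDump fails, the encryption header itself is damaged; if the partition is gone entirely, the partition table was lost.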
  4. Too bad. I guess the problem might be that the Docker vdisk can only be read very slowly when it's stored on a busy HDD. I solved the problem by moving it to an SSD cache instead.
  5. It should be decrypted only after entering the password and starting the array, right? However, Unraid already fails to show the encrypted volume properly before the array is started (right after boot). Also, when using the "failing" SSD in another pool (single, not RAID), it works properly. The combination of the "failing" SSD with a different HDD continued to fail, but the combination of the other, "working" SSD with the other HDD did not fail after many reboots (logs attached). So it seems to me that this particular disk is not working properly in an (encrypted) btrfs RAID1 pool. Could it be due to the PCIe-to-SATA add-on card all three drives are attached to? I may try to change ports or use the pool without encryption. 1a_before-reboot-diagnostics-20210830-2036.zip 1b_after-reboot-diagnostics-20210830-2100.zip 2a_before-reboot-diagnostics-20210901-1613.zip 2b_after-reboot-diagnostics-20210901-1735.zip 3a_before-reboot-diagnostics-20210901-2343.zip 3b_after-reboot-diagnostics-20210901-2351.zip
  6. I managed to get logs from before and after rebooting when the problem occurred (attached). This time, I created a new pool (with a different name) from the same SSD drives just before it happened. With the old name, it didn't happen in the last 10 ± 2 reboots. So maybe it's a cache management issue after all? Best regards after-reboot.zip before_reboot.zip
  7. Hey everyone, thanks for sticking with me. In the meantime, I moved my hard drives to completely new hardware. The problem persists: sometimes after a reboot, the encryption symbol of the second cache disk goes missing, and when the array starts, it throws the "missing disk" errors. Furthermore, I have a feeling that it appears more often when data was written to the disk before rebooting; maybe that's a coincidence rather than causation... When I changed the disk order in the pool, the disk went missing as well, unfortunately losing all its data. This leads me to the conclusion that the SSD (controller) is damaged. I will continue trying to grab logs from before and after a reboot where the problem occurs.
  8. Hey, did you ever find out what the issue with the slow UI during heavy HDD loads was? Best regards
  9. Thanks for looking into this 😃 I will try to get the logs in this order (after my vacation). From the current point of view, do you think it's a hardware failure? Best regards
  10. Alright, it just started happening again. Here is a log. The thing is, it only occurs after restarts, not while running, so I can't really provide logs from before. One thing I noticed: the encryption symbol (green lock, to the left of the pool name) was there before the reboot and is now gone. So maybe it's some issue with the drive header being corrupted? server-diagnostics-20210716-1300_again.zip
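     If a corrupted drive header is the suspect, it may be worth verifying and backing up the LUKS header while the pool is in a good state; a sketch, with the device name as an assumption:

     ```shell
     # Replace sdX1 with the actual encrypted partition.
     cryptsetup isLuks /dev/sdX1 && echo "LUKS header present"
     # Save a copy of the header somewhere safe (here: the boot USB) so it
     # can be restored with luksHeaderRestore if it ever gets corrupted.
     cryptsetup luksHeaderBackup /dev/sdX1 --header-backup-file /boot/luks-header-sdX1.img
     ```

     Running the isLuks check after a "bad" reboot would show directly whether the header survived.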
  11. I tried this previously, as suggested by some other post (and intended to point that out with "When setting up the pool (again) everything works fine"), but it didn't make the pool consistent across reboots/time. The last time I tried this, the problem recurred after a while. I checked the SSD, but it seems OK; data is stored persistently for a week even without power. server-diagnostics-20210714-1857-newpool.zip
  12. My bad, you can find them attached to the previous post
  13. Greetings everyone, I have a similar problem, but there is a key difference: only after rebooting the server can my cache pool device not be reassigned. So my server starts throwing this error message every minute: "Cache pool BTRFS missing device(s)". Then, of course, the "pool" continues on a single drive only (logs below). Consequently, I cannot run the balance command to RAID1. The pool size is displayed as the sum of both drives, which should not be the case; free space is displayed correctly, though. When setting up the pool (again), everything works fine with RAID1 and r/w access. Curiously, the missing device is displayed as the one that continues to work, but maybe that's just for identification of the respective pool. When I use the SSDs individually, they work as intended. They are even attached to the same SATA controller. I'm happy to get your opinions. Best regards Unraid version: 6.9.2 server-diagnostics-20210714-1540.zip
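     For reference, the "balance command to RAID1" mentioned above is the standard btrfs profile conversion; a sketch, assuming the pool is named "cache" and therefore mounted at /mnt/cache (the usual Unraid convention):

     ```shell
     # Convert both data and metadata to the raid1 profile, then verify.
     btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
     btrfs filesystem df /mnt/cache   # Data and Metadata lines should now say RAID1
     ```

     This only succeeds while btrfs actually sees both devices, which is exactly what fails here after a reboot.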
  14. Hey, I have the same issue, so editing the config might help to remove it and get the system running again. But I still want to pass through the GPU, so did you find a solution to get this done properly? Btw., you may want to add the [solved] flag to the title
  15. Hey, thanks for your replies 😃 Actually, yes, the IP seems to work fine. So your guess about the Master Browser might be correct. When I go to the SMB settings, the master browser election works quite fast. 1) I tried, but nothing changed. 2) Yes, they are, and all caps. 3) Yes, I see all of the computers right away. Besides, there are some "other", "multimedia" and "network infrastructure" devices. How would I do that? Is there some Windows tool for that? Anyway, I wouldn't mind the initial delay if the connection didn't drop from time to time. Any idea about that? Best regards 😃
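     One way to inspect the Master Browser situation without a dedicated Windows tool is Samba's nmblookup from any Linux box on the network; a sketch (the host name SERVER is a placeholder):

     ```shell
     # Query for the special __MSBROWSE__ name to find the current master browser.
     nmblookup -M -- -
     # NetBIOS name/status lookup for a specific machine (placeholder name).
     nmblookup -S SERVER
     ```

     If the master-browser answer changes between runs, repeated browser elections could line up with the periodic 30-second dropouts described in the thread.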
  16. Hey everyone, I think I'm having a weird issue with my SMB shares, and I couldn't find anyone describing the same problem. There are several topics about access restrictions regarding the SMB version, or other authentication problems. Also, I've read about the LanMan settings in gpedit, but all of those solutions apply if you can't connect to the shares at all. My problem is a little different, but easy to describe: the first time I try to access the network share from Windows 10, I get an error: \\server could not be accessed (network path was not found). BUT: after approx. 30 seconds, it suddenly works fine. After a while, the connection drops again (even when streaming files) for about 30 seconds. Repeatedly. The same problem occurs from all Windows 10 machines. From Ubuntu, it works instantly, so I guess it is a Windows-to-SMB thing. Disk spin-down should therefore not be the reason either. Also, in the Windows "Network" folder overview, the server appears right away. Every other connection, such as the web interface, Docker access, etc., works smoothly and without interruptions. I'm glad if someone has a suggestion. If you need more info, please let me know 😃 Best regards
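     A way to narrow down where the ~30-second stall lives is to time name resolution separately from the SMB connection itself; a sketch from a Linux client (the host name "server" is taken from the post, smbclient ships with Samba):

     ```shell
     # Does name resolution itself stall, or only the SMB session setup?
     time ping -c1 server
     # Anonymous share listing; -N skips the password prompt so the
     # measured time is dominated by the SMB connect, not typing.
     time smbclient -L //server -N
     ```

     If ping resolves instantly but the share listing hangs for ~30 seconds, the delay is on the SMB layer rather than in name resolution/discovery.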
  17. Hey everyone, I think I'm having a weird issue with my Docker containers, and I couldn't find anyone describing the same problem. What I'm trying to do: connect container A through a privoxy container B (VPN) to the internet. The error message (in the Firefox docker) is "Proxy server refused connection". In the privoxy docker logs, no connection attempts show up. Strangely, this is the only thing that's not working. All of the following scenarios worked just fine: - Connecting container A to the internet via B with network "none" and the --net=container:privoxy option. - Connecting container A to the internet via another privoxy container. - Connecting container A to the internet directly. - Connecting a virtual machine to the internet via privoxy container B. It didn't matter whether I set the proxy system-wide or in Firefox only. Also, all of the above happens whether the privoxy container's network is configured as bridged with port mappings, or on custom:br0 with an individual IP and standard ports. The same result for HTTP and SOCKS5 proxy. Enabling host access to custom networks (Docker settings) didn't change anything, as suggested by the privoxy docker maintainer. He said my config should be OK, as it's running perfectly well on his machine... Actually, when I deleted privoxy B and installed it from scratch, it worked as intended the first time I tried. Afterwards, it returned to the described behavior... Setup: A: several dockers, e.g. Firefox B: hideme privoxy (https://hub.docker.com/r/alturismo/wg_hideme_privoxy) VM: Windows 10 I'm glad if someone has a suggestion. If you need more info, please let me know 😃 Best regards
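     Since no connection attempts even show up in the privoxy logs, the first thing to check is plain reachability of the proxy port from container A's network; a sketch (the IP is a placeholder, and 8118 is only privoxy's default HTTP proxy port, an assumption about this setup):

     ```shell
     # Is the proxy port reachable at all from this network namespace?
     nc -zv 192.168.1.50 8118
     # If it is, try an actual request through the proxy.
     curl -x http://192.168.1.50:8118 -sS https://ifconfig.me
     ```

     If nc already fails from inside container A but succeeds from the host, the problem is inter-container routing (e.g. bridge vs. custom:br0 isolation) rather than privoxy itself, which would fit the "refused connection with nothing in the logs" symptom.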
  18. This would actually be the thing I'm looking for. I would like to assign one GPU to multiple VMs. Can we somehow make this work? Maybe as an Unraid plugin? rel: https://github.com/DualCoder/vgpu_unlock
  19. I wonder why nobody replied here. Anyway, what I think is that you can indeed transfer the key to the new drive, since the old one is unusable anyway. Furthermore, if you were hypothetically to boot from the old USB, it would probably fall back to a trial version, or you would simply not be able to start the array unless you talked to support. Regarding your files, nothing should happen to them before they are started in an array. So if you don't remember your old assignment order, it would likely be best to mount the disks with Unassigned Devices, or on another computer, and back up the data. Afterwards, you could try to simply start the old array from the new USB. All your settings and docker/VM configs will be gone, though the data will likely still be there (depending on your previous settings). Good luck 😃