realies

Everything posted by realies

  1. Upgraded from 6.9.2 to 6.10.2 and a bunch of containers broke.
  2. @iilied, does the latest update fix it?
  3. Multi-hop setups with Mullvad do not work. Presumably this is due to a restriction requiring the "Peer tunnel address" field to use an IP from a different address pool than the "Local tunnel network pool" space.
  4. Another change, this time breaking, from a few minutes ago. All environment variables are now capitalised. This means the old pgid and puid environment variables have to be changed to PGID and PUID.
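     For illustration, a minimal sketch of the rename, assuming a plain `docker run` invocation, the Unraid default IDs (99/100) and the image name `realies/soulseek`:
     ```
     # before the change the variables were lower-case:
     #   docker run -d -e puid=99 -e pgid=100 realies/soulseek
     # after the change the same variables must be upper-case:
     docker run -d \
       -e PUID=99 \
       -e PGID=100 \
       realies/soulseek   # image name assumed for illustration
     ```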
  5. Just to let you know, the Docker image has just been refactored and updated. It now uses the latest ubuntu image and noVNC package, TigerVNC to scale according to the browser viewport, deprecates the `resolution` and `resize` configuration parameters, and adds parameters to reconfigure the default umask, VNC server password and time zone.
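     A minimal sketch of the new parameters; the exact variable names (`UMASK`, `VNC_PASSWORD`, `TZ`) are assumptions based on the description above rather than confirmed from the README:
     ```
     docker run -d \
       -e UMASK=000 \              # assumed name for the default umask override
       -e VNC_PASSWORD=changeme \  # assumed name for the VNC server password
       -e TZ=Europe/London \       # assumed name for the time zone
       realies/soulseek            # image name assumed for illustration
     # `resolution` and `resize` are no longer needed: TigerVNC scales to the browser viewport
     ```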
  6. Where did you install qm from? Edit: recreating the VM also fixes it, as mentioned above.
  7. @JoeUnraidUser, thanks for figuring it out; will try it with a lower-case day of the week.
  8. Setting scripts to run at a custom cron interval (e.g. 0 6 * * MON) does not work. Is there a missing dependency?
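     For reference, a sketch of equivalent crontab expressions, assuming a parser that accepts numeric days of the week or lower-case day names; the script path is a placeholder:
     ```
     # upper-case day name (the form that fails here):
     0 6 * * MON /path/to/script.sh
     # numeric day of week (1 = Monday) or lower-case name, assumed to be accepted:
     0 6 * * 1   /path/to/script.sh
     0 6 * * mon /path/to/script.sh
     ```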
  9. @ridge, if you have a look at the readme in the repository, the variables for setting user and group ID are not in capital letters.
  10. @cen you might have swapped the container/host ports. https://docs.docker.com/config/containers/container-networking/ Also, the folder mapping seems wrong. Please refer to the README.
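      To illustrate the ordering, a minimal sketch with placeholder image, port and path values; the host side comes first in both flags:
      ```
      docker run -d \
        -p 8080:80 \                            # host port 8080 -> container port 80
        -v /mnt/user/appdata/example:/config \  # host folder -> container folder
        example/image                           # placeholder image name
      ```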
  11. @cen, are you sure you are trying to access the container at the configured port?
  12. Upon a manual stop/start, the disks had lost their assignments. This is the second time disk assignments have been lost due to a hang. The hardware is:
      AMD Ryzen 1700 @ 3.8GHz
      AsRock X370 Taichi
      Crucial 4 x 16GB ECC @ 2400MHz
      NVIDIA GeForce GTX 1050 Ti
      AMD Radeon RX 580
      Kingston SV300 120GB
      Samsung 850 EVO 500GB
      Samsung 960 PRO 512GB
      32TB Mixed Drive Array
      Corsair HX1000i
      Fractal Design Define R5
      The RX 580 and NVMe drive are assigned to the Windows 10 VM; could the AMD card be related?
  13. @iilied, you might want to reduce the auto-save period to a lower value than the default 60 to make Soulseek save its config. @ridge, just pushed a new build allowing for correct ownership using the pgid/puid environment variable pattern. Volume mount mapping and File Sharing settings within Soulseek need to be updated.
  14. @limetech any plans for something like this to be implemented?
  15. @jonathanm, apologies for using the wrong unRAID terminology and flooding the topic unnecessarily. At the bottom of my last post there's also: Updating all posts accordingly.
  16. Thanks for pointing out the correct unRAID terminology. I have not used 82% of the available space, and it is impossible to use 18.9 TB of a 9 TB total array size. Before adding the new drives the total space used percentage was at 55%; when the new drives were added, it jumped to 82% during the clearing stage.
  17. @Benson, nice generalisation, although for this use case it would be great if disks were pre-cleared at maximum speed. It would stress each drive to its full potential and maximise the chance of surfacing early drive mortality during this stage.
      @BRiT, the writes fluctuate up and down, and during the clearing stage, when new drives are added to the array, nothing is being written to the parity drive (0.0 B/s).
      @itimpi, absolutely sure I mean pre-clearing*. This occurs automatically when a new drive is added to the disk array. Another thing that can be observed is that drives that have undergone the clearing step still wait for the remaining new drives to finish clearing before they are mounted to the array (Disk 5). In my view, drives that have successfully been cleared should be mounted to the array without waiting for the remaining drives to complete.
      *It seems people refer to pre-clearing when the drive is cleared before being assigned to the array, so in unRAID terms I mean clearing.
  18. Just added two new drives, one of them is 5400rpm and the other 7200rpm. Noticed that the clearing write speed fluctuates identically between the new drives (±0.5 MB/s). Wondering if this is a bug, a feature or just the current state of the clearing component. Is it not possible to max out the write speed of each drive independently?
  19. The disk array size and free space calculations seem to be wrong during the pre-clearing of new drives in the array.
  20. Not sure what you mean by the hub, but the PSU has to be connected to the system via USB in some way.
  21. @LintHart, generally all PSUs that work with Corsair Link should work with this plugin, as long as their hardware ID is added to the list of supported devices.
  22. Can confirm that using the CLI tool I have successfully reverted the BIOS on the ASRock X370 Taichi back to 5.10. I don't think the comments claiming that flashing a new BIOS from bare-metal Windows is unsafe apply here, because the tool only prepares the BIOS for flashing within Windows; upon restart, the BIOS is validated and flashed by a component of the existing BIOS.
  23. This does not work for the ASRock X370 Taichi on BIOS 5.50 when trying to downgrade to 5.10.
  24. I was only able to downgrade from 5.60 to 5.50 on the X370 Taichi (https://www.asrock.com/mb/AMD/X370 Taichi/index.asp#BIOS) and it did not change the situation. Downgrading to a lower BIOS version is not possible, as described here: https://forum.level1techs.com/t/attention-amd-vfio-users-do-not-update-your-bios/142685 @limetech, is it possible that the next version of unRAID could include the patch mentioned in the topic above? The diff can be seen here: https://clbin.com/VCiYJ. According to the users on the L1 forums, this is a fully working fix. Any tips on applying this before a new unRAID release are welcome.
  25. While `btrfs restore -v /dev/sdX1 /mnt/disk2/restore` managed to recover most of the data, it seems `btrfs check --repair /dev/sdX1` managed to restore everything. Yet to validate for any data corruption, but so far it all looks good. Many thanks @johnnie.black!
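      For reference, a sketch of the recovery sequence described above; `/dev/sdX1` stays a placeholder for the affected device and the restore target is the path from the post:
      ```
      # first copy readable files off the unmountable filesystem (does not modify the source device)
      mkdir -p /mnt/disk2/restore
      btrfs restore -v /dev/sdX1 /mnt/disk2/restore
      # last resort: attempt an in-place repair of the filesystem metadata
      btrfs check --repair /dev/sdX1
      ```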