realies last won the day on December 31 2017

  1. Where did you install qm from? Edit: recreating the VM also fixes it, as mentioned above.
  2. @JoeUnraidUser, thanks for figuring it out; I will try it with a lowercase day of the week.
  3. Setting scripts to run at a custom cron interval (e.g. 0 6 * * MON) does not work. Is there a missing dependency?
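For reference, a minimal crontab sketch of the schedule above. The script path is a placeholder, and whether an uppercase name like MON is accepted depends on the cron implementation, so the numeric or lowercase forms are the safer bet:

```shell
# Run the script every Monday at 06:00.
# Day-of-week can be numeric (1 = Monday) or a name; some cron
# implementations are picky about case, so lowercase is safest.
0 6 * * 1   /boot/scripts/example.sh
0 6 * * mon /boot/scripts/example.sh   # equivalent, name form
```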
  4. @ridge, if you have a look at the readme in the repository, the variables for setting user and group ID are not in capital letters.
  5. @cen you might have swapped the container/host ports. Also, the folder mapping seems wrong. Please refer to the README.
  6. @cen, are you sure you are trying to access the container at the configured port?
  7. Upon a manual stop/start, the disks had lost their assignments. This is the second time disk assignments have been lost due to a hang. The hardware is: AMD Ryzen 1700 @ 3.8GHz, ASRock X370 Taichi, Crucial 4 x 16GB ECC @ 2400MHz, NVIDIA GeForce GTX 1050 Ti, AMD Radeon RX 580, Kingston SV300 120GB, Samsung 850 EVO 500GB, Samsung 960 PRO 512GB, 32TB mixed drive array, Corsair HX1000i, Fractal Design Define R5. The RX 580 and the NVMe drive are assigned to the Windows 10 VM; could the AMD card be related?
  8. @iilied, you might want to reduce the auto-save period to something lower than the default of 60 so that Soulseek saves its config. @ridge, just pushed a new build that allows correct ownership via the pgid/puid environment variable pattern; the volume mount mapping and the File Sharing settings within Soulseek need to be updated.
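A sketch of the lowercase pgid/puid pattern mentioned above. The IDs and the container path are placeholders (99/100 are commonly used as Unraid's nobody/users IDs); check the repository README for the exact variable names and mount point:

```shell
# Lowercase puid/pgid environment variables set file ownership
# inside the container. IDs and the /data path are illustrative
# placeholders, not the image's documented defaults.
docker run -d \
  -e puid=99 \
  -e pgid=100 \
  -v /mnt/user/appdata/soulseek:/data \
  realies/soulseek
```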
  9. @limetech any plans for something like this to be implemented?
  10. @jonathanm, apologies for using the wrong unRAID terminology and flooding the topic unnecessarily. At the bottom of my last post there's also: Updating all posts accordingly.
  11. Thanks for pointing out the correct unRAID terminology. I have not used 82% of the available space, and it is impossible to have used 18.9 TB of a 9 TB total array. Before adding the new drives, the total used percentage was 55%; when the new drives were added, it jumped to 82% during the clearing stage.
  12. @Benson, nice generalisation, although for this use case it would be great if disks were pre-cleared at maximum speed: it would stress each drive to its full potential and maximise the chance of surfacing early drive mortality during this stage. @BRiT, the writes fluctuate up and down, and during the clearing stage, when new drives are added to the array, nothing is written to the parity drive (0.0 B/s). @itimpi, absolutely sure I mean pre-clearing*. It occurs automatically when a new drive is added to the disk array. Another observation: drives that have undergone the clearing step still wait for the remaining new drives to finish clearing before they are mounted to the array (Disk 5). In my view, drives that have been cleared successfully should be mounted to the array without waiting for the remaining drives to complete. *It seems people use pre-clearing for when the drive is cleared before being assigned to the array, so in unRAID terms I mean clearing.
  13. Just added two new drives, one 5400 rpm and the other 7200 rpm. Noticed that the clearing write speed fluctuates identically across both new drives (±0.5 MB/s). Wondering if this is a bug, a feature, or just the current state of the clearing component. Is it not possible to max out the write speed of each drive independently?
  14. The disk array size and free size calculations seem to be wrong during the pre-clearing of new drives in the array.
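One way the figures reported earlier could arise, purely as an illustrative sketch: if still-clearing drives are counted as fully used space, the usage percentage inflates. All numbers here are assumptions for illustration, not taken from the actual accounting code:

```python
# Hypothetical accounting: before the new drives, the array is 9 TB
# with 55% used; ~14 TB of new drives are assumed to be still clearing.
mounted_total_tb = 9.0
used_tb = 0.55 * mounted_total_tb          # 4.95 TB actually used
clearing_tb = 14.0                         # assumed new-drive capacity

# If the UI adds the clearing drives to the total AND counts them as used:
shown_total = mounted_total_tb + clearing_tb   # 23.0 TB
shown_used = used_tb + clearing_tb             # 18.95 TB
print(f"{shown_used:.2f} TB used of {shown_total:.1f} TB "
      f"({shown_used / shown_total * 100:.1f}%)")
# -> 18.95 TB used of 23.0 TB (82.4%)
```

Under these assumed numbers, the display would match both oddities at once: "used" exceeding the mounted 9 TB array and a jump to roughly 82%.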
  15. Not sure what you mean by the hub, but the PSU has to be connected to the system via USB in some way.