Forusim

Members
  • Content Count

    71

Community Reputation

4 Neutral

About Forusim

  • Rank
    Newbie



  1. @primeval_god Thanks for the clarification. Non-intuitive is an understatement, to say the least. Maybe there is a good reason why it works the way it does, but I find my use case reasonable. I have now solved it in the entrypoint.sh, like you suggested.
  2. From the linked reference I understood that you can "put" files into the volume at build time via the Dockerfile. Somehow this only works at runtime, but then I do not see the point of declaring the volume in the Dockerfile, since I could map it directly on docker run. However, I want to have all required files routed to one location and map only this one /config location in the Unraid GUI.
  3. Hello, I apologize if my question is too noobish, but I was not able to figure out what I am doing wrong. I am running Unraid 6.9.2 and have mostly been a Docker user, not a developer. I want to create a custom Docker container which pulls some Python apps from git (not my repo) and runs them there. This Python app has some hardcoded config/log paths, which I would like to route via a volume to a persistent store on /mnt/cache/appdata/my-docker. I tried it via a Dockerfile, as described here, but the created files are not initially in the volume when I start the Docker container.
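The entrypoint.sh workaround mentioned above can be sketched roughly as follows. This is a minimal illustration, not the exact solution from the thread: the /config and /defaults paths, the CONFIG_DIR/DEFAULTS_DIR variables, and the seed_config name are all assumptions.

```shell
#!/bin/sh
# entrypoint.sh (sketch): seed the /config volume with the defaults baked
# into the image, then hand off to the real app command.
# DEFAULTS_DIR is populated at build time via the Dockerfile; CONFIG_DIR is
# the volume mapped in the Unraid GUI. All paths here are illustrative.

seed_config() {
    config_dir="${CONFIG_DIR:-/config}"
    defaults_dir="${DEFAULTS_DIR:-/defaults}"
    mkdir -p "$config_dir"
    # Copy defaults only when the volume is empty, so user edits survive
    # container restarts and image updates.
    if [ -d "$defaults_dir" ] && [ -z "$(ls -A "$config_dir" 2>/dev/null)" ]; then
        cp -a "$defaults_dir"/. "$config_dir"/
    fi
}

# Docker passes the CMD as arguments to the entrypoint; seed the volume
# and replace this shell with the app process.
if [ "$#" -gt 0 ]; then
    seed_config
    exec "$@"
fi
```

The reason this has to happen at runtime: files written to a VOLUME path during the build are captured in the image, but a host directory mapped over that path at `docker run` hides them, so the entrypoint copies them into the mapped directory on first start.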
  4. +1 for writing Chia plots to a second array without parity. Initially I had the idea to make a "no cache" share on multiple disks with "most free" allocation. But writing the 101 GB plot to a parity-protected array is slowed down by 50-70%, which takes too much time for parallel plotting. So the only reasonable way of plotting is against unassigned devices, rotating the destination disks in the plotting script.
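Rotating destinations in a plotting script could look something like the sketch below. The mount points, the state file, and the plotter command are hypothetical; only the round-robin idea comes from the post.

```shell
#!/bin/sh
# Sketch: cycle plot destinations across unassigned-device mount points.
# DESTS and all paths are examples; adjust to your own system.
DESTS="/mnt/disks/ud1 /mnt/disks/ud2 /mnt/disks/ud3"

next_dest() {
    # $1 = state file remembering the last index used; each call returns
    # the next mount point from $DESTS, wrapping around at the end.
    state="$1"
    set -- $DESTS
    n=$#
    i=$(cat "$state" 2>/dev/null)
    i=${i:-0}
    i=$(( (i % n) + 1 ))
    echo "$i" > "$state"
    eval "echo \$$i"
}

# Example plotting loop (the plotter invocation is hypothetical):
# while true; do
#     dest=$(next_dest /tmp/plot.idx)
#     chia plots create -d "$dest"
# done
```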
  5. Are there any benefits to migrating from a btrfs docker.img to a directory? Will it use less space, because a fixed 20 GB allocation is no longer required? Is it possible to transfer all the data in the existing docker.img to the docker directory?
  6. Somehow I messed up the device names in UD with my formatting attempts. The disk which was initially Dev 1 - ST8000AS0002-1NA17Z_Z840E2KD (sdh) is now only sdj. This would not bother me, but now I cannot spin the disk down anymore (the green dot is not clickable for this disk). And when I click on the sdj link to the attributes and SMART info, it only displays "Can not read attributes" sections. How can I restore the initial behaviour?
  7. As of Unraid 6.7 or 6.8, XFS is formatted with reflink=1, which uses a lot of space (e.g. 69 GB on a 10 TB disk). I found this feature not worth the space and keep formatting my new disks with a custom approach, as described in the linked thread. Since I have run out of SATA ports, I would like to add an additional disk via USB & UD. I was hoping to format it with the same command as I did for the array: mkfs.xfs -m crc=1,finobt=1,reflink=0 -l su=4096 -s size=4096 -f /dev/mdX However, UD refuses to mount such a disk and only offers to format it. Is there a way to trick UD
  8. You may try the netdata docker. Which docker are you using for Chia? Does it forward the GUI to an outside port, or are you operating via console only?
  9. I would like to thank everybody who helped me in this thread. Cooler Master accepted my RMA and sent me a new PSU, unfortunately a lower-tier MWE 550 Gold V2. From my research this one has quite bad reviews, so I decided not to risk my other components. I purchased a new PSU, a Seasonic Focus PX 550W, which is now quietly powering my NAS.
  10. My PSU has a single 12V rail and is semi-modular with 3x3 SATA connectors. I have used 2x3 for my 6 disks for about a year now without issues. I also tested with 3x2 (3 cables), but the results are the same: if I read-test with dd on more than 4 disks, the system crashes and reboots. It seems that the PSU is dying just before its 5-year warranty runs out (purchased 02/10/2016). I hope that Cooler Master will accept the RMA.
  11. I did some stress tests on my system: Mprime ran without issues for half an hour in hard test mode. Diskspeed.sh ran without issues for all disks, each disk sequentially. Then I tried to simulate parity-check load with parallel calls of dd: (dd if=/dev/sdd of=/dev/null bs=1G count=1 iflag=nocache) & (dd if=/dev/sde of=/dev/null bs=1G count=1 iflag=nocache) & (dd if=/dev/sdf of=/dev/null bs=1G count=1 iflag=nocache) & (dd if=/dev/sdg of=/dev/null bs=1G count=1 iflag=nocache) 4 out of 6 disks can run in parallel; when I add more -> crash + reboot.
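The parallel dd calls above can be generalized into a small helper so the number of disks is easy to vary. This is only a sketch: the parallel_read name is made up, the device paths are examples from the post, and the reads are non-destructive (nothing is written to the disks).

```shell
#!/bin/sh
# Read from several devices (or files) in parallel to approximate
# parity-check load. Reads only; output goes to /dev/null.

parallel_read() {
    # $1 = number of 1 GiB chunks to read per device;
    # remaining args = devices or files to read from.
    count="$1"; shift
    for dev in "$@"; do
        dd if="$dev" of=/dev/null bs=1M count=$(( count * 1024 )) 2>/dev/null &
    done
    wait   # block until every background dd has finished
}

# Example, matching the four-disk test from the post:
# parallel_read 1 /dev/sdd /dev/sde /dev/sdf /dev/sdg
```

Adding one device at a time to the argument list reproduces the "4 disks fine, 5+ crash" bisection without waiting for a full parity check.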
  12. Crashes occur even in safe mode, without Docker or VMs running. The trigger is very likely the parity check: usually it crashes right at the start, but sometimes a few minutes after.
  13. Unfortunately I do not have a spare PSU to test with. Can I simulate a high load on the PSU without running a parity check? I have already removed the UPS and plugged the server directly into the socket; it still crashes. Temps are at 40 °C per the Temp plugin, and the fans are running as usual. My current state is that I can mount the array with the old flash drive (Unraid does not force a parity check here). The last time I started a parity check, it ran for 15 minutes and checked 154 GB (1.5%) at 179.0 MB/sec, then all of a sudden crashed and rebooted.