srfnmnk

Community Answers

  1. I have a high-IOPS pool made up of a bunch of heterogeneous SSDs (different brands, sizes, etc.). This is an awesome idea, but my write performance is only about 18 MB/s (using the fio test below) while my read speed is > 680 MB/s. I am wondering whether a specific disk is going bad or is simply very slow. How can I test read/write performance against each individual SSD? I'd prefer not to dismantle the pool. Is there some way to use the /dev/sdag or something to perform a read/write test? (See the per-device fio sketch after this list.) Thank you.

     ```
     fio --name=randwrite --ioengine=libaio --iodepth=16 --rw=randwrite --bs=4k --direct=1 --size=512M --numjobs=4 --runtime=60 --group_reporting --directory=/mnt/ssdpool
     ```
  2. Yeah, I already restored and it worked fine. At this point this is more about curiosity; I learned what I didn't know, hence the questions.
  3. Yeah, when the server crashed some of the files got corrupted or deleted. I didn't know super.dat had it; I think disk.cfg used to have it. I looked and looked for it. Is there any way to open super.dat? (See the sketch after this list.)
  4. We had a major power spike (twice) during a storm the other day, and both times it was powerful enough to blow out a UPS, resulting in a server restart. The second failure I assume happened during boot. Either way, the configs got borked and I had to restore /boot/config to get things working again. One of the major issues was that my disk configs were all missing and none of the 24 disks were assigned anywhere. Throughout this process I noticed that disk.cfg was basically empty, even in historical backups. By empty I mean all the values seem empty (attached for reference). Even now that it's all restored and working fine, disk.cfg is still relatively empty. Where is the drive-to-disk mapping stored? I'm on 6.12.4. Additionally, where are the Docker config XMLs stored? Thanks. disk.cfg
  5. Easy question -- I want to clean up my main page, as I have more than 10 years of dead drives in the historical devices section. If a drive is dead (and as such is showing up as historical) and will never be added back to the array (since it has already been recycled), is there any issue with deleting these historical devices to clean things up? Thanks.
  6. Understood -- I'm just saying that this is a bug and the community would love to see this resolved.
  7. I did read through this forum post -- it looks like leaving the browser open can result in this issue, and I did leave a browser open on it. Please help us escalate this issue; leaving a browser on a web page overnight should not result in a complete crash of the nginx server. Restarting nginx seems to have resolved the issue. Thank you.
  8. Ok, so today I logged in and received this little present: the UI was blank-ish. Looking at the nginx logs I see the following (full log attached). Why is nchan_max_reserved_memory running out, and how do I fix it? (See the nginx sketch after this list.)

     ```
     gth=1 HTTP/1.1", host: "localhost"
     2023/11/09 09:26:26 [error] 19497#19497: MEMSTORE:00: can't create shared message for channel /disks
     2023/11/09 09:26:27 [crit] 19497#19497: ngx_slab_alloc() failed: no memory
     2023/11/09 09:26:27 [error] 19497#19497: shpool alloc failed
     2023/11/09 09:26:27 [error] 19497#19497: nchan: Out of shared memory while allocating message of size 21383. Increase nchan_max_reserved_memory.
     2023/11/09 09:26:27 [error] 19497#19497: *550567 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
     2023/11/09 09:26:27 [error] 19497#19497: MEMSTORE:00: can't create shared message for channel /disks
     2023/11/09 09:26:28 [crit] 19497#19497: ngx_slab_alloc() failed: no memory
     2023/11/09 09:26:28 [error] 19497#19497: shpool alloc failed
     2023/11/09 09:26:28 [error] 19497#19497: nchan: Out of shared memory while allocating message of size 22385. Increase nchan_max_reserved_memory.
     2023/11/09 09:26:28 [error] 19497#19497: *550573 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/update2?buffer_length=1 HTTP/1.1", host: "localhost"
     ```

     error.log
  9. OK, will try this next time if it happens again. Thanks.
  10. As far as I know, there was only one browser window open, from one PC.
  11. Hi, I woke up this morning to find my web UI down. I restarted nginx and still had the issue. I've attached the diagnostics for review. I also looked in the nginx logs to see what I could find and found the following. My server had 32+ GB of memory available per free -h. I rebooted the server and the UI is working again, but I'd like to understand what happened and how to avoid it in the future. Thank you. pumbaa-diagnostics-20231108-1007.zip
  12. Right, I had it set to /mnt/user/appdata and it was doing the same thing, so I went ahead and checked each box explicitly, hoping it would change something; it did not.
  13. Hi @KluthR -- I'm still having some issues with the new appdata backup. Sorry if I'm missing something, but below is a screenshot of the output of my old backups using the CA_Backup plugin and another with the output of the new appdata backup. Am I doing something wrong? Notice how many /appdata/config folders don't seem to be getting backed up. I've also attached the config for you. Thank you. config.json
  14. Thank you so much for all your help @ghost82. I went ahead and upgraded the kernel now that I have internet, and virtio is still not working for the vdisk or the network. I also reinstalled the QEMU guest agent, but still nada. Any ideas why virtio still won't work? Should it? (See the guest-side checks after this list.)

      ```
      sudo apt full-upgrade
      sudo apt install -y --reinstall qemu-guest-agent
      ```
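
For question 1, a minimal sketch of how a single member SSD could be benchmarked without dismantling the pool, assuming fio is installed and /dev/sdX is replaced with the actual device node of the disk under test. The read tests below are non-destructive; a raw write test against a pool member would destroy data, so write benchmarks should target a file on a mounted filesystem instead.

```
# Sequential read throughput of one raw device (non-destructive; --readonly is a safety guard).
# /dev/sdX is a placeholder for the member disk being tested.
fio --name=perdisk-read --filename=/dev/sdX --readonly \
    --rw=read --bs=1M --ioengine=libaio --iodepth=16 --direct=1 \
    --runtime=30 --time_based --group_reporting

# Random-read variant to spot a member with poor latency rather than poor bandwidth.
fio --name=perdisk-randread --filename=/dev/sdX --readonly \
    --rw=randread --bs=4k --ioengine=libaio --iodepth=16 --direct=1 \
    --runtime=30 --time_based --group_reporting
```

Running the same test against each member and comparing the numbers should make a single slow or failing SSD stand out, since pool writes can be limited by the slowest device.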
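
For question 3, super.dat is a binary file, so a normal text editor won't show much. A rough, read-only way to peek at what it contains is to dump printable strings or a hex view; the /boot/config/super.dat path is an assumption based on a typical flash-drive layout.

```
# Print any human-readable text embedded in the file
# (disk identifiers, if stored as plain text, would show up here).
strings /boot/config/super.dat | less

# Full hex dump for anything strings misses.
hexdump -C /boot/config/super.dat | less
```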
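
For questions 7, 8, and 11, a sketch of the recovery steps described in those posts: restart nginx, then watch whether the nchan shared-memory errors come back. The init-script path and log location are assumptions about a stock setup, not something confirmed in the thread.

```
# Restart the web UI's nginx instance (assumed Slackware-style init script).
/etc/rc.d/rc.nginx restart

# Watch for recurring "Out of shared memory ... Increase nchan_max_reserved_memory" messages.
tail -f /var/log/nginx/error.log | grep -i nchan
```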
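
For question 14, a few read-only checks that can be run inside the Ubuntu/Debian guest to narrow down why virtio devices aren't being picked up; they only inspect state and change nothing.

```
# Confirm the VM actually exposes virtio devices to the guest.
lspci | grep -i virtio

# Check that the virtio drivers are loaded (or built into the kernel).
lsmod | grep virtio

# Verify the guest agent service is installed and running.
systemctl status qemu-guest-agent
```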