Everything posted by joshbgosh10592

  1. I accidentally filled up a share that uses only a cache pool, to the point where unRAID reported I was using something like 3.3TB. The cache pool consists of one 2TB and one 3TB drive in RAID0. Because of the mismatched sizes, I should have about 4TB usable (2TB striped across both disks), right? I emptied the share down to 2.96TB. When I attempt to copy anything to the share from any client, I receive an error, and when I try to nano a file via unRAID's CLI, I get "Error writing test.txt: No medium found". Because of the issue on the unRAID side, I've ignored troubleshooting the clients' errors, which is why I'm only saying that they return an error. I'm still getting this even after a reboot. Reads from the share work perfectly normally. Any ideas? Main page view: Balance Status of the cache pool: fdisk -l returns this for the two disks in ShareHDD (sdh and sdi):

     Disk /dev/sdh: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
     Disk model: WDC WD20EFRX-68E
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: dos
     Disk identifier: 0x00000000
     Device     Boot Start        End     Sectors Size Id Type
     /dev/sdh1        64   3907029167  3907029104 1.8T 83 Linux

     Disk /dev/sdi: 2.73 TiB, 3000592982016 bytes, 5860533168 sectors
     Disk model: WDC WD30EFRX-68E
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: gpt
     Disk identifier: CFDAB87B-8FBD-4B4F-B745-3C2DC2DF1340
     Device     Start        End     Sectors Size Type
     /dev/sdi1     64  5860533134  5860533071 2.7T Linux filesystem
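As a sanity check on the 4TB expectation: btrfs RAID0 stripes every chunk across all members, so each disk can only contribute as much as the smallest one. A quick sketch of the arithmetic, using the byte counts from the fdisk output above (`btrfs filesystem usage <mountpoint>` reports the live equivalent, with the stranded space listed as "Unallocatable"):

```shell
# RAID0 across a 2TB and a 3TB disk: each chunk needs a stripe on both
# devices, so the larger disk is capped at the smaller disk's size.
small=2000398934016   # /dev/sdh (WD20EFRX), from fdisk -l
large=3000592982016   # /dev/sdi (WD30EFRX), from fdisk -l
usable=$((small * 2))         # ~4TB actually allocatable
stranded=$((large - small))   # ~1TB that the RAID0 profile cannot touch
echo "usable bytes:   $usable"
echo "stranded bytes: $stranded"
```

In practice the writable amount can be lower still once metadata chunks and nearly-full data chunks are accounted for, which may explain hitting "No medium found" well before the nominal 4TB.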
  2. Yup, that's exactly what happened. Inside the share settings it shows what I want, but in the share overview it shows the old pool name (you can see where the new share I created, Share A, shows "Array_cache"). I toggled it to another cache pool and back, and now I'm writing to the cache correctly. (Well, for some reason it's moving the files out and back in through the VM instead of just moving them within the NAS. Previously I was able to move at around 200MB/s while putting no strain on the VM NIC, something like NFS offload. I'm assuming that's just how I have the shares mapped in the VM?) Thank you!!
  3. So, it's set to use the original cache pool from pre-6.9, however I renamed it once I created the other pools. Could the rename of the cache pool have screwed it up, even though the GUI says it's correct?
  4. I'd rather be slightly generic publicly about the share names (unless my explanation below doesn't make sense and it's just too confusing). Share A is named "T------s" in the configs (short term), and Share B is "M----s" (long term). When I'm done working with the files in the short-term share (Share A), I copy them to the long-term one (Share B). This is to prevent the spinning disks in Share B from constantly being awake (and bogged down by the work being performed).
  5. Thank you! Here it is. So this isn't expected behavior, then? I know, this is me manually moving files from one share (cache only) to another share (cache enabled, using a different cache pool than the original). Thank you though!
  6. Kind of confusing, but I'm having an issue where I try to copy a file from share A, which is set to only use cache pool A, to share B, which uses cache pool B, but mover moves the files to the array. Cache A is RAID5 NVMe SSDs and Cache B is RAID1 NVMe SSDs. When I use Windows (so, SMB) to copy from share A to share B, the copy writes directly to the array disks, which are HDDs, making the copy process MUCH slower. Is this expected behavior? I originally was using share A via Unassigned Devices and wrote to the cache of share B, but I wanted to be able to control it natively.
  7. I've also noticed /var/log/nginx throwing the out-of-shared-memory error... It's spamming this:

     2021/06/19 01:10:53 [crit] 6642#6642: ngx_slab_alloc() failed: no memory
     2021/06/19 01:10:53 [error] 6642#6642: shpool alloc failed
     2021/06/19 01:10:53 [error] 6642#6642: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
     2021/06/19 01:10:53 [error] 6642#6642: *5824862 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"
     2021/06/19 01:10:53 [crit] 6642#6642: ngx_slab_alloc() failed: no memory
     2021/06/19 01:10:53 [error] 6642#6642: shpool alloc failed
     2021/06/19 01:10:53 [error] 6642#6642: nchan: Out of shared memory while allocating channel /var. Increase nchan_max_reserved_memory.
     2021/06/19 01:10:53 [alert] 6642#6642: *5824863 header already sent while keepalive, client:, server:
     2021/06/19 01:10:53 [alert] 27152#27152: worker process 6642 exited on signal 11
     2021/06/19 01:10:53 [crit] 6798#6798: ngx_slab_alloc() failed: no memory
  8. I'm still having this issue on 6.9.1. I have to purge /var/log/syslog (.1 and .2 as well) and then run /etc/rc.d/rc.nginx restart in order for the pages to start responding again...
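For anyone hitting the same thing, the workaround above as one sequence (paths are the stock unRAID ones; this only buys time until the nchan leak fills shared memory again):

```shell
: > /var/log/syslog                        # truncate in place so syslogd keeps a valid handle
rm -f /var/log/syslog.1 /var/log/syslog.2  # drop the rotated copies too
/etc/rc.d/rc.nginx restart                 # restart nginx to free nchan's shared memory
```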
  9. Also found this thread today after wondering why my cache pool had 4.5GB of 500GB remaining. Manually clicking "Move" would make the button show as if it were moving, but refreshing the Main page would show that it wasn't. I removed the Move Tuner plugin and manually kicked the mover off, and now it's doing its thing. Not sure how I went this long without knowing it wasn't running... Also on 6.9.1.
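For future reference, the mover can be kicked off and watched from the CLI as well. The script path below is the stock unRAID location; treat this as a sketch, since invocation details can vary between releases:

```shell
# Start unRAID's mover manually, then confirm it is actually moving files.
/usr/local/sbin/mover &
tail -f /var/log/syslog | grep -i mover   # mover logs each file it moves
```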
  10. I have 8 interfaces on my unRAID server, almost all of which are assigned. I just recently added a dual SFP+ NIC for 10Gb networking and bonded the two ports (bond6) using 802.3ad (and configured the switch accordingly). My primary IP is For temporary testing, I set bond6 to have an IP on the subnet, but ending in .51 (the switch is set to native VLAN 220), enabled VLANs, assigned VLAN 221, and assigned it the IP of Earlier, I was able to get to the server using both 220.50 and 221.50, but not 220.51. I ignored it, as it was only a temporary assignment until I had time to switch the primary IP over to the fiber LAG. I was having an issue with the unRAID logs filling up (the 6.8.3 nginx memory bug) and I rebooted... Now I can only connect to the server using the 221.50 IP. I need the server to be dual-homed on both VLANs for latency/bottleneck reasons. I'm not sure what's going on... I thought it could be that two interfaces shared an IP on the same subnet, so I set the bond6 VLAN 220 config to have no IP, but nothing changed... Any ideas? Attached are my network settings.
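When one VLAN IP on a dual-homed box stops answering after a reboot, these standard iproute2 checks usually narrow it down (interface names will differ per setup; the config path is unRAID's stock location):

```shell
ip -br addr show                 # does each VLAN sub-interface still hold its IP?
ip route show                    # two interfaces on one subnet -> only one route wins
ip neigh show                    # look for stale/FAILED ARP entries toward the gateway
cat /boot/config/network.cfg     # what unRAID actually persisted across the reboot
```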
  11. Because it's RAID-0, wouldn't I be able to get the full size of both disks added together, since nothing is lost to parity? I'm trying to see how to expand it, but all the documentation refers to the pool by its /dev/ path. I'm assuming it would be the first disk in the pool, /dev/sdi. Looking at the output of parted's list, it only shows sdi1 and nothing about sdp, and the size is only 2TB, when the original size was 4TB. I'm wondering if the pool didn't accept the replacement?
  12. So, I did more research on it and thought that would work, but after I ran it, the size didn't seem to change - it still shows the same as before in "df -h /mnt/disks/PVEData". Thoughts?
  13. I'm curious how btrfs knows the new disk is its replacement, because that worked! Thank you! To expand the btrfs filesystem (original was 2x2TB, and I just swapped one out for a 3TB), I'm assuming I just use this, correct? Since the total usable should be 5TB. btrfs filesystem resize max /mnt/disks/PVEData Currently:

     root@NAS:/var/log# df -h /mnt/disks/PVEData/
     Filesystem      Size  Used Avail Use% Mounted on
     /dev/sdi1       3.7T  1.7T  2.0T  47% /mnt/disks/PVEData
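One caveat worth flagging, as an assumption rather than something confirmed in this thread: on a multi-device btrfs filesystem, `btrfs filesystem resize` acts on one device at a time and defaults to devid 1, so growing the swapped-in 3TB disk may require naming its devid explicitly:

```shell
btrfs filesystem show /mnt/disks/PVEData          # note the devid of the new 3TB disk
btrfs filesystem resize 2:max /mnt/disks/PVEData  # "2" is assumed -- use the real devid
btrfs filesystem usage /mnt/disks/PVEData         # confirm the new totals
```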
  14. No worries! So then via the unRAID webUI or btrfs commands, and how? I thought that with Unassigned Devices, you only tell one of the disks in the btrfs pool to mount and ignore the other, since it mounts along with it? Just trying to get it straight so I don't lose anything.
  15. I'd have to do all of that to tell btrfs to use a different drive in one of my unassigned devices btrfs pools? I don't have a cache in that. I'm confused.
  16. I figured there would be a way to clone to a larger disk and just resize the pool afterwards? Regardless, though: how would I tell the pool to use sdp instead of sdj? Thank you so far!!
  17. I know this is a very old post, but I'm just now having an issue where I need to use ddrescue myself. While looking around, I found THIS PAGE that seems to be helping me. Hopefully this will help future Googlers.
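For other future Googlers: the tool is GNU ddrescue, and the usual pattern is a fast first pass that skips bad areas, then targeted retries driven by the map file. Device names below are examples only; verify with lsblk before running anything:

```shell
# Pass 1: copy everything readable, skipping the slow scraping phase (-n).
# -f is required because the destination is a block device.
ddrescue -f -n /dev/sdj /dev/sdp rescue.map
# Pass 2: revisit the bad areas recorded in rescue.map, retrying each 3 times.
ddrescue -f -r3 /dev/sdj /dev/sdp rescue.map
```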
  18. Thank you! I'm working on clearing anything off that pool that I can, but how would I swap the failing drive with the new one? The replacement drive is larger than the existing one, but I'm assuming that's just a btrfs resize command.
  19. Is there a way to flag the repair as something like "accept loss" when it comes across data it can't read? I'm just figuring I'm missing something, as it's not really failing, but rather it's run out of sectors to write to (as far as I understand), so there's probably a file or two that is corrupted, and I'm willing to accept that.
  20. So, I ran this last night and it was progressing pretty well (about 10% an hour). However, it's still not finished and is stuck at 94.5%, and in UD it shows "command timedout" for sdj, and the "Current pending sector" count is climbing like crazy (yesterday it was 148; right now it's at 2171). Is there something special I should have done instead of the normal replace, because of the errors the drive was throwing?
  21. Thank you! As a note for future me/Googlers, I used: btrfs replace start -f /dev/sdj1 /dev/sdp1 /mnt/disks/PVEData where sdj is the failing drive and sdp is the replacement. I also learned that you cannot have a trailing "/" on the mount path, or it'll error with: ERROR: source device must be a block device or a devid
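One more note that may save someone a step: btrfs replace runs in the background after the start command returns, so it's worth confirming it actually finished before trusting the pool (same mount path as above):

```shell
btrfs replace status /mnt/disks/PVEData    # shows percent done, or "finished"
btrfs filesystem show /mnt/disks/PVEData   # the old device should no longer appear
```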
  22. Same issue here... Still on 6.8.3, and I don't want to upgrade to an -rc branch, as I don't have enough time to provide feedback on it. /etc/rc.d/rc.nginx restart temporarily solved it, but we'll see for how long.
  23. I have a RAID-0 btrfs array using Unassigned Devices that I configured via the command line. I have now received two warnings about "Current pending sector", showing 62 and then again at 148, so I'm assuming the drive needs to be removed. I'm trying to follow the steps here for replacement, but the guide seems to assume the drive has completely failed, and the steps are for a RAID1. I'm assuming it should be somewhat similar, but I don't want to lose the data. It's not important, but I'd rather not. The directions say to use something similar to: btrfs replace start 7 /dev/sdf1 /mnt obviously replacing /mnt with the real mount path. I'm also assuming my path should be /mnt/disks/PVEData (since that's the name of the btrfs share)? Thanks!
  24. There's gotta be something we can do, as now that my transcode cache is set to /tmp, the size hasn't changed at all, not even during bigger batches. I just don't know enough to know where to look... Thank you though! Hopefully someone else will come across this thread and help us out.