joshbgosh10592

Members
  • Posts: 60
  • Joined
  • Last visited

  1. I accidentally filled up a share that uses only a cache pool, to the point where unRAID reported I was using something like 3.3TB. The cache pool consists of one 2TB and one 3TB drive in RAID0. Because of the mismatched sizes, I should have about 4TB usable (2TB across both disks), right? I emptied the share down to 2.96TB. When I attempt to copy anything to the share from any client, I receive an error, and when I try to write a file with nano via unRAID's CLI, I get "Error writing test.txt: No medium found". Because of the issue on the unRAID side, I've ignored troubleshooting the clients' errors, which is why I'm only saying that they return an error. I'm still getting this even after a reboot, and I can read from the share perfectly normally. Any ideas? (See the usage/balance sketch after this list.) Main page view and balance status of the cache pool: see the attached screenshots. fdisk -l returns this for the two disks in ShareHDD (sdh and sdi):

     Disk /dev/sdh: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
     Disk model: WDC WD20EFRX-68E
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: dos
     Disk identifier: 0x00000000

     Device     Boot Start        End    Sectors  Size Id Type
     /dev/sdh1          64 3907029167 3907029104  1.8T 83 Linux

     Disk /dev/sdi: 2.73 TiB, 3000592982016 bytes, 5860533168 sectors
     Disk model: WDC WD30EFRX-68E
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: gpt
     Disk identifier: CFDAB87B-8FBD-4B4F-B745-3C2DC2DF1340

     Device     Start        End    Sectors  Size Type
     /dev/sdi1     64 5860533134 5860533071  2.7T Linux filesystem
  2. Yup, that's exactly what happened. Inside the share settings it shows what I want, but in the share overview it shows the old share name (you can see where the new share (Share A) I created shows "Array_cache"). I toggled it to another cache pool and back, and now I'm writing to the cache correctly. Well, for some reason it's moving the files in and back out through the VM instead of just moving them within the NAS; previously I was able to move at around 200MB/s while putting no strain on the VM's NIC (something like NFS offload). I'm assuming that's just how I have the shares mapped in the VM? (See the server-side copy sketch after this list.) Thank you!!
  3. So, it's set to use the original cache pool from pre-6.9; however, I renamed that pool once I created the other pools. Could renaming the cache pool have screwed it up, even though the GUI says it's correct?
  4. I'd rather stay slightly generic publicly on the share names (unless my explanation below doesn't make sense and it's just too confusing). Share A is named "T------s" in the configs (short term), and Share B is "M----s" (long term). When I'm done working with the files in the short-term share (Share A), I copy them to the long-term one (Share B). This is to prevent the spinning disks in Share B from constantly being awake (and bogged down by the work being performed).
  5. Thank you! Here it is. So this isn't expected behavior then? I know this is me manually moving files from one share (cache only) to another share (cache enabled, using a different cache pool than the original). Thank you though! nas-diagnostics-20210819-1917.zip
  6. Kind of confusing, but I'm having an issue where I try to copy a file from share A, which is set to use only cache pool A, to share B, which uses cache pool B, but the files end up going to the array. Cache A is RAID5 NVMe SSDs and cache B is RAID1 NVMe SSDs. When I use Windows (so, SMB) to copy from share A to share B, the copy writes directly to the array disks, which are HDDs, making the copy process MUCH slower. Is this expected behavior? (See the server-side copy sketch after this list.) I originally was using share A via Unassigned Devices and I wrote to the cache of share B, but I wanted to be able to control it natively.
  7. I've also noticed /var/log/nginx throwing the out-of-shared-memory error... It's spamming this:

     2021/06/19 01:10:53 [crit] 6642#6642: ngx_slab_alloc() failed: no memory
     2021/06/19 01:10:53 [error] 6642#6642: shpool alloc failed
     2021/06/19 01:10:53 [error] 6642#6642: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
     2021/06/19 01:10:53 [error] 6642#6642: *5824862 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"
     2021/06/19 01:10:53 [crit] 6642#6642: ngx_slab_alloc() failed: no memory
     2021/06/19 01:10:53 [error] 6642#6642: shpool alloc failed
     2021/06/19 01:10:53 [error] 6642#6642: nchan: Out of shared memory while allocating channel /var. Increase nchan_max_reserved_memory.
     2021/06/19 01:10:53 [alert] 6642#6642: *5824863 header already sent while keepalive, client: 10.9.0.240, server: 0.0.0.0:80
     2021/06/19 01:10:53 [alert] 27152#27152: worker process 6642 exited on signal 11
     2021/06/19 01:10:53 [crit] 6798#6798: ngx_slab_alloc() failed: no memory
  8. I'm still having this issue on 6.9.1. I have to purge /var/log/syslog (the .1 and .2 rotations as well) and then run /etc/rc.d/rc.nginx restart in order for the pages to start responding again (commands in the sketch after this list)...
  9. Also found this thread today after wondering why my cache pool had only 4.5GB of 500GB remaining. Manually clicking "Move" would make the button show as if it were moving, but upon refreshing the Main page it would show that it wasn't. I removed the Move Tuner plugin and manually kicked it off, and the mover is doing its thing. Not sure how I went this long without knowing it wasn't running... Also on 6.9.1.
  10. I have 8 interfaces on my unRAID server, almost all of which are assigned. I recently added a dual SFP+ NIC for 10Gb networking and bonded the two ports (bond6) using 802.3ad (and configured the switch accordingly). My primary IP is 10.9.220.50. For temporary testing, I set bond6 to have an IP on the same subnet but ending in .51 (the switch port is set to native VLAN 220), enabled VLANs, assigned VLAN 221, and gave it the IP 10.9.221.50. Earlier, I was able to reach the server using both 220.50 and 221.50 but not 220.51. I ignored it, as it was only a temporary assignment until I had time to switch the primary IP over to the fiber LAG. I was having an issue with the unRAID logs being full (the 6.8.3 nginx memory bug) and I rebooted... Now I can only connect to the server using the 221.50 IP. I need the server to be dual-homed on both VLANs for latency/bottleneck reasons, and I'm not sure what's going on. I thought it could be that two interfaces shared an IP on the same subnet, so I set the bond6 VLAN 220 config to have no IP, but nothing changed. Any ideas? (Some checks are sketched after this list.) Attached are my network settings.
  11. Because it's RAID0, wouldn't I be able to get the full size of both disks added together, since nothing is lost to parity? I'm trying to see how to expand it, but all the documentation refers to the pool by a /dev/ device. I'm assuming it would be the first disk in the pool, /dev/sdi. Looking at the results of parted's list output, it only shows sdi1 and nothing about sdp, and the size is only 2TB, when the original size was 4TB. I'm wondering if the pool didn't accept the replacement? (See the pool-membership sketch after this list.)
  12. So, I did more research on it and thought that would work, but after I ran it, the size didn't seem to change - it still shows the same as before in "df -h /mnt/disks/PVEData". Thoughts? (See the per-device resize sketch after this list.)
  13. I'm curious how btrfs knows the new disk is its replacement, because that worked! Thank you! I'm assuming that to expand the btrfs filesystem (the original was 2x2TB, and I just swapped one disk out for a 3TB), I just use this, correct? Since the total usable should be 5TB.

      btrfs filesystem resize max /mnt/disks/PVEData

      Currently:

      root@NAS:/var/log# df -h /mnt/disks/PVEData/
      Filesystem      Size  Used Avail Use% Mounted on
      /dev/sdi1       3.7T  1.7T  2.0T  47% /mnt/disks/PVEData
  14. No worries! So then via the unRAID webUI or btrfs commands, and how? I thought that with Unassigned Devices you only tell one of the disks in the btrfs pool to mount and ignore the other, as the second one just mounts along with it? Just trying to get it straight so I don't lose anything. (See the pool-membership sketch after this list.)
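
Command sketches for the posts above

For the "No medium found" / full cache pool in post 1: on a btrfs RAID0 pool with mismatched disk sizes, writes can start failing once the smaller device has no unallocated space left, even though df still shows free room, and a balance can reclaim partially-used chunks. A minimal check/rebalance sketch - the mount point /mnt/sharehdd is an assumption, substitute wherever the ShareHDD pool is actually mounted:

    # Show per-device allocation and how much space is still unallocated on each disk
    btrfs filesystem usage /mnt/sharehdd
    btrfs device usage /mnt/sharehdd

    # Compact data chunks that are 75% full or less, freeing up unallocated space
    btrfs balance start -dusage=75 /mnt/sharehdd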
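For the slow share-to-share copies in posts 2 and 6: copying between two user shares from a Windows client (or through a VM) round-trips the data over the network. One way to keep the transfer entirely inside the NAS is to run it from the unRAID shell against the pool paths directly. A rough sketch - the pool names "cache_a"/"cache_b" and share names "ShareA"/"ShareB" are placeholders for the real ones:

    # Copy from the share on pool A straight into the share on pool B, server-side
    rsync -a --progress /mnt/cache_a/ShareA/ /mnt/cache_b/ShareB/

    # Or move instead of copy, deleting source files as they complete
    rsync -a --progress --remove-source-files /mnt/cache_a/ShareA/ /mnt/cache_b/ShareB/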
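For the unresponsive webGUI in posts 7 and 8, the workaround described there amounts to the following, run from the unRAID console:

    # Truncate the live syslog and drop the rotated copies that filled /var/log
    : > /var/log/syslog
    rm -f /var/log/syslog.1 /var/log/syslog.2

    # Restart the webGUI's nginx so the pages respond again
    /etc/rc.d/rc.nginx restart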
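For the missing 10.9.220.x connectivity in post 10, a few standard checks narrow down whether the problem is the LACP bond, the VLAN sub-interface, or a duplicate address/route. The interface name bond6.221 is an assumption based on the post:

    # Confirm 802.3ad actually negotiated and both SFP+ ports show as active slaves
    cat /proc/net/bonding/bond6

    # Confirm the VLAN 221 sub-interface exists and is up
    ip -d link show bond6.221

    # See which interfaces currently hold 10.9.220.x and 10.9.221.x addresses
    ip -4 addr show

    # Look for two interfaces competing for the same 10.9.220.0/24 route
    ip route show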
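For the pool-membership questions in posts 11 and 14, btrfs itself can report which devices belong to the pool mounted at /mnt/disks/PVEData (the path used in posts 12 and 13):

    # List every member device and its devid for the mounted pool
    btrfs filesystem show /mnt/disks/PVEData

    # Per-device allocation, which makes it obvious whether the replacement disk is in use
    btrfs device usage /mnt/disks/PVEData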
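For the resize that didn't seem to take in posts 12 and 13: "btrfs filesystem resize max" only grows devid 1, so after replacing the second device with a larger disk the new device can stay limited to the old size. Growing that specific device should make the extra space appear; the devid "2" below is only an example, use whatever "btrfs filesystem show" reports for the 3TB disk:

    # Find the devid of the swapped-in 3TB disk
    btrfs filesystem show /mnt/disks/PVEData

    # Grow that device to its full size (replace 2 with the devid found above)
    btrfs filesystem resize 2:max /mnt/disks/PVEData

    # Verify the new free space is visible
    btrfs filesystem usage /mnt/disks/PVEData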