Everything posted by LAS

  1. Having occasional issues with the GUI, I've added the following to my .zshrc as a quick fix for most issues:

     # Unraid GUI shortcut
     unraid-gui () {
         if [[ "$1" = "nuke" ]]; then
             unraid-api restart
             /etc/rc.d/rc.nginx restart
         else
             /etc/rc.d/rc.nginx "$1"
         fi
     }

     To start nginx if it's down, /etc/rc.d/rc.nginx start usually does the trick (using the above function, I just type 'unraid-gui start').
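     For reference, the two paths through the function look like this (assuming the function above has been sourced from your .zshrc):

     unraid-gui start   # runs /etc/rc.d/rc.nginx start to bring the GUI back up
     unraid-gui nuke    # restarts both unraid-api and nginx for the stubborn cases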
  2. Did you find a solution for this? I just recently upgraded from an 8th-gen to a 12th-gen i5. I managed to get the UHD 770 showing up in the VM, though I'm only able to boot as long as I have the Unraid virtual graphics enabled as well. I've tested the Parsec Virtual Display and github.com/itsmikethetech/Virtual-Display-Driver with no luck, getting the same error when attempting to connect using Parsec. Edit: Add Amyuni USB Mobile Monitor to that list as well; no luck.
  3. I really loved seeing this pop up on CA. With about 45 containers running on my main server, I've really been missing a way to clean up my dashboard a bit. A few things, though: the folder reports that an update is ready when running containers from local self-built images. I'd also really love to see numbers on the folders on the Dashboard, e.g. "2/3 started" on "Web". Attached my debug if needed. debug-DOCKER.json
  4. It would seem it is the issue, yes. Everything has been stable now that I've not been pushing the disks too hard. (In short: SMR drives absorb incoming writes into a small CMR-style cache region, and once sustained transfers fill it, speeds collapse while the drive reshingles data in the background.) I think I'll be replacing the disks with Ironwolfs; the whole Ironwolf series seems to be CMR. What actually happens is explained quite nicely at https://superuser.com/a/1691665 Thank you @JorgeB for being patient with a complete newbie in this field of computing. It has been some interesting days of learning how transfers and caching work.
  5. Restarting the array seems to fix the issue. Transferring in batches of max 80GB, with small breaks to let the disk settle, I've now transferred another 1TB without any more issues.
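     A minimal sketch of how the batching could be scripted, assuming rsync over ssh, run from the old server; the host and paths are examples, and it batches per directory rather than strictly per 80GB:

     # Copy one top-level directory at a time, pausing between batches
     # so the SMR drive can flush its internal cache zone.
     for dir in /mnt/movies/*/; do
         rsync -a --progress "${dir%/}" root@tower:/mnt/user/media/movies/
         sleep 600   # let the disk settle before the next batch
     done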
  6. I found some recommendations for disabling the parity drive during the initial data transfer; combined with setting Direct IO to Yes, I was getting consistent 110MB/s transfer speeds. ...Until I hit the 3.2TB mark on my 8TB BarraCuda Disk 1, and transfers stalled. A direct transfer to the drive instead of the share gave the same result: stalled. Attempts to read/download from Disk 1 stalled as well. I excluded Disk 1 from the share, Disk 2 spun up, and I was back at full speed. After rebooting the server, I'm now able to both read and write to Disk 1 at good speeds. I tested writing 63GB of data to ensure it wasn't all RAM cache (32GB total RAM): 110MB/s consistent. Would there be anything I've overlooked that could cause this behaviour, or do I simply have a drive that's starting to fail? SMART shows the following:

     1    Raw read error rate      2907048 (hex 2C 5BA8)
     5    Reallocated sectors      0
     7    Seek error rate          669490849 (hex 27E7 9EA1)
     187  Reported uncorrect       0
     188  Command timeout          0 0 0
     197  Current pending sector   0
     198  Offline uncorrectable    0
     199  UDMA CRC error count     0

     From what I can gather from this thread, this equals no errors. Edit: After another 25GB, it yet again stalls. Sigh.
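     For anyone wanting to pull the same attributes from the command line, a quick sketch (the device name is an example; check yours with lsblk):

     # Print the SMART attribute table for a drive
     smartctl -A /dev/sdb

     As I understand it, Seagate packs an operation count into the raw value of attributes 1 and 7, so large raw numbers there are normal as long as attributes 5, 187, 197 and 198 stay at 0.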
  7. Update: I let it run overnight, transferring about 250GB to the cache. The mover was started on schedule while I was still copying files from the older server. I noticed when I woke up that all speeds had dropped to almost a halt. I stopped the transfer from the old server; the mover was still at very low speeds.
  8. I did the same rclone sync from the mounted smb share, using my nvme as cache: speeds at about 65MB/s, same high CPU usage. It did some 40GB of data (2-5GB files), then slowed down somewhat. The mover worked as it should. I powered down the server and swapped the SATA cable (and SATA port, as it's shared with nvme3, even though I don't have one inserted). Then I did rsync over ssh instead, and lo and behold! Speeds stable at 105MB/s, and the CPU cores are mostly calm and almost idle! I did a 120GB transfer, and everything worked perfectly. Transfers to the array are getting speeds between 45-50MB/s; I don't know what's expected on these drives (preclear started just above 200MB/s and ended at about 70MB/s).
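     The rsync-over-ssh invocation was along these lines, run from the old server (host and paths are examples, not my exact ones):

     # Push files straight to the Unraid share over ssh;
     # -a preserves attributes, --partial resumes interrupted files.
     rsync -a --partial --progress /srv/movies/ root@tower:/mnt/user/media/movies/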
  9. They're slow, but still about 70 times faster than the speeds I'm seeing, even on the slower areas of the platter. Rclone and rsync (if you're more familiar with that) have about the same performance, I believe. The Intel SSD was fine a few days ago; it's a datacenter drive worth more than the rest of my drives combined, with 4 months of use. Reading more from the thread I skimmed through when I swapped btrfs for xfs, I suppose I could look into my RAM settings. I'll do a test setting the nvme (services drive) as cache for the media share as well, though as I'm seeing the same behaviour when writing directly to the array, I have my doubts.
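     If the "RAM settings" in question are the kernel's dirty-page writeback knobs (my assumption; that's what those btrfs/xfs threads usually point at), checking and lowering them looks like this:

     # Show current thresholds (percent of RAM that may hold dirty pages)
     sysctl vm.dirty_ratio vm.dirty_background_ratio

     # Lower them so flushes start earlier and stall less abruptly;
     # these values are common suggestions, not tested recommendations.
     sysctl -w vm.dirty_ratio=10
     sysctl -w vm.dirty_background_ratio=5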
  10. I've been looking to replace a 13-year-old Ubuntu server running about 40 docker containers, Plex being the most resource-hungry. Unraid seemed like the perfect choice, as I'd love to have the possibility to spin up a VM to play some games now and then. Getting the VM working went almost painlessly: passthrough of a 1TB nvme disk, a 3060 Ti, a USB PCIe card, and 4 cores (8 threads) of an i5-12600K.

      I set up my array with a 10TB Ironwolf for parity, 3x8TB Barracudas for storage, and a 1.2TB Intel S3710 SSD as the pool "cache". A separate 1TB nvme serves as the pool "services" for docker and whatever is needed for other VMs down the road. I spun up cloudflared, Portainer and Tailscale, and everything worked perfectly. VM stopped, containers running.

      I started transferring movies off my old server, mounting it as an smb share and using rclone sync. I started getting btrfs corruption errors, did some googling, reformatted both pool drives to xfs, and tried again. I was getting transfer speeds of about 35MB/s, both computers wired, but after some 15-20 minutes speeds would start to slow down drastically. I stopped the transfers, restarted, and the same thing: slowing down.

      I figured I'd move the stuff off the cache, so I started the mover. Same thing here! Speeds started at 50MB/s on parity + disk 1 before slowing down to sub-1MB/s. I stopped the mover manually with mover stop, and noticed 2-3 cores under high load, usually two at 100% while a third bumped up and down. I rebooted the server; everything was fine and idling at about nothing. I restarted the mover: same thing! I stopped it again, but 2-3 cores kept on working, and the cache disk, parity and disk 1 showed blips of sub-1MB/s transfers, even though the mover was stopped and nothing else was accessing them. About 10-15 minutes later, everything suddenly went back to idling.

      I made an attempt at skipping the cache, moving the files directly to the array (Cache: No instead of Cache: Yes). Initial speeds were higher, at about 45MB/s, but dropped a lot faster, and most of my cores were at full usage.

      It seems like something is not working as intended in my config, or I'm doing something wrong, but I have no ideas left on where to look. hyper-diagnostics-20221129-1934.zip
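      For reference, the rclone sync setup was roughly this (the server, share and mount-point names are examples):

      # Mount the old server's smb share, then sync it into the Unraid user share.
      mkdir -p /mnt/remotes/oldserver
      mount -t cifs //oldserver/movies /mnt/remotes/oldserver -o username=las
      rclone sync /mnt/remotes/oldserver /mnt/user/media/movies --progress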