fletchto99

Members · 9 posts

  1. I upgraded the first data disk in my array to a larger drive (I have two parity disks), and the device now shows as being "emulated". However, `/mnt/disk1` is actually empty and any files that were on that disk are missing. Are the files lost for good, or is there something I can do to recover them? The data rebuild is currently 40% done. I've still got the old disk, so worst case I wonder if I can mount it with Unassigned Devices and copy the files over. The filesystem is encrypted XFS, though.
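If the rebuild finishes and the files still aren't there, the old disk can in principle be opened outside the array. A rough sketch, assuming the LUKS-on-XFS layout Unraid uses for encrypted disks; `/dev/sdX1` is a placeholder for the old disk's partition, and Unassigned Devices can do the same from the GUI:

```shell
# Identify the old disk's partition first (placeholder: /dev/sdX1)
lsblk -o NAME,SIZE,SERIAL

# Open the LUKS container read-only; prompts for the array passphrase
cryptsetup open --readonly /dev/sdX1 olddisk

# Mount the XFS filesystem inside, also read-only
mkdir -p /mnt/olddisk
mount -o ro /dev/mapper/olddisk /mnt/olddisk

# Copy anything missing back, then tear down
# rsync -av /mnt/olddisk/ /mnt/disk1/
umount /mnt/olddisk
cryptsetup close olddisk
```

Opening everything read-only keeps the old disk untouched in case a second recovery attempt is needed.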
  2. The go file is a script that is run when the server boots. It lives at `/boot/config/go`.
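For illustration, the stock go file is only a couple of lines; anything beyond the `emhttp` line is a user addition (the `cp` line below is a hypothetical example, not part of the default):

```shell
#!/bin/bash
# Default contents: start the Unraid management interface at boot
/usr/local/sbin/emhttp &

# User customizations go below, e.g. (hypothetical):
# cp /boot/config/custom/sshd_config /etc/ssh/sshd_config
```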
  3. This will make for some great CTF challenges
  4. Hey! I'm going to be moving my current setup from a desktop tower to a rackmount server. Part of the switch involves going from SATA ports to a Mini-SAS backplane (still the same SATA drives). I'm hoping to just drop the drives in and not really need to re-configure much. For reference, I'll be getting the Norco 4224 with the LSI 9305-24i. Will the drives be detected based on their serial numbers, or is there some preparation I should be doing in advance?
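Unraid identifies array members by serial number rather than by port, so a before/after comparison of serials is an easy sanity check. A sketch; the device-name prefixes under `/dev/disk/by-id/` (`ata-`, `wwn-`, `scsi-`) vary by controller:

```shell
# Snapshot the serial-to-disk mapping before the move
lsblk -d -o NAME,SIZE,SERIAL,MODEL > /boot/serials-before.txt

# The persistent by-id names embed the serial and should match up
# again after moving from onboard SATA to the SAS HBA
ls -l /dev/disk/by-id/ | grep -v -- -part
```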
  5. I rebooted into my BIOS to see if the drives were detected there. Thankfully, they were. After that I rebooted back into the OS, started the array in maintenance mode, and all of my disks were detected properly. The parity disk was showing as disabled, so I followed the steps at https://wiki.unraid.net/Troubleshooting#Re-enable_the_drive to re-enable and rebuild parity. The rebuild is currently running, and it looks like all of my data is still intact. Is there anything else I should be doing in the meantime? Perhaps this was related to the motherboard's onboard controller. I've exported the diagnostics file (after rebooting), though I suspect it's useless now; I should have thought of that before. Since things are pointing to my motherboard's controller, do you have any suggestions of tests/diagnostics to run on it to ensure it was just a blip and not an ongoing error?
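As a starting point for exercising the drives themselves (as opposed to the controller), SMART self-tests are a common suggestion. A sketch, with `/dev/sdX` as a placeholder for each drive:

```shell
# Health summary plus the drive's own logged errors
smartctl -H -l error /dev/sdX

# Kick off an extended self-test; it runs inside the drive
# in the background and can take hours on an 8TB disk
smartctl -t long /dev/sdX

# Check the result later
smartctl -l selftest /dev/sdX
```

Clean self-tests with errors still appearing in the syslog would point back toward the controller or cabling rather than the disks.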
  6. I woke up this morning to my log directory full and hundreds of read/write errors across all of my disks. I immediately stopped all of my Docker containers. What would be my next best course of action? I stopped the array and restarted it, and now all of the disks show as unmountable. There were no power outages or brownouts as far as I'm aware (plus I've got the server on a proper UPS, which should handle that). I'm thinking all hope is lost here, but if anyone has any ideas I'm all ears. The disks are connected to the onboard controller. Some more info:
     - Unraid 6.7.2 (Nvidia)
     - All WD 8TB Red NAS drives (less than 6 months old)
     - Ryzen 2700X
     - ASUS ROG Crosshair VII Hero
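For "unmountable" XFS disks, the usual first move is a dry-run filesystem check with the array started in maintenance mode, before attempting any repair. A sketch; `md1` stands in for each array slot, and on an encrypted array the check targets the mapped device instead:

```shell
# Dry run: report problems, change nothing (-n)
xfs_repair -n /dev/md1

# On an encrypted array the filesystem lives on the mapper device:
# xfs_repair -n /dev/mapper/md1
```

Running against `/dev/mdX` rather than `/dev/sdX` keeps parity in sync if a real repair is done later.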
  7. When downloading via NZBGet the system becomes nearly unusable. The specs are:
     - Ryzen 2700X
     - 32 GB DDR4 Corsair LP
     - Crosshair VII Hero
     The array is XFS (encrypted):
     - Parity: 8TB WD Red NAS
     - Disk 1: 4TB Seagate
     - Disk 2: 4TB Seagate
     - Disk 3: 8TB WD Red NAS
     The cache is:
     - Kingston SSDNow V3 120GB (will be replaced with a 1TB Samsung 970 EVO NVMe in 2 days)
     All downloads from NZBGet go directly to the SSD cache (the share's cache option is set to "Yes"). I've also tried pinning NZBGet to the last 4 cores and their threads, with no luck. Netdata is reporting extremely high I/O wait times, and htop is reporting extremely low CPU usage while the dashboard reports 100% usage. I've got TRIM (Dynamix plugin) scheduled to run hourly, and a test using DiskSpeed reported that the cache drive should be operating normally. I'm not running any VMs, just Docker containers. Would anyone have suggestions on where to begin troubleshooting this issue? Please let me know if more information is required. I'd have thought this hardware should be more than capable.
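High I/O wait with low real CPU usage usually means the processes are stuck waiting on a device. To check whether the cache SSD itself is the bottleneck while a download runs, per-device statistics are a reasonable first look; a sketch using sysstat's iostat:

```shell
# Extended per-device stats, refreshed every 5 seconds
iostat -x 5

# For the cache device: %util pinned near 100 together with high await
# would point at the SSD saturating under NZBGet's unpack/repair I/O
# rather than at a CPU problem
```

A small consumer SATA SSD collapsing under simultaneous download, unpack, and par2 writes would be consistent with the symptoms described, and with the planned NVMe upgrade helping.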