dis3as3d

Members
  • Posts: 21
  • Joined
  • Last visited
Everything posted by dis3as3d

  1. I have a Windows VM running that's showing 80% memory utilization of 16 GB of memory. Without thinking I bumped it up to 24 GB of memory. After the change I was still at 80% utilization, and that got me wondering what was happening. Looking into it, the VM is using WELL BELOW what it's reporting. OS is Windows Server 2019. Is anyone else experiencing similar issues? What should I investigate to help get memory usage under control? (A host-side check is sketched after this list.)
  2. I had a VM up and running. After a reboot, the VM didn't come back online. I VNC'ed into the VM to see why it wasn't coming back and it looks like it can't find the OS. How do I get my old VM back?
  3. Seems like plenty of folks have Samsung drives working OK. I'm wondering if it's something specific to the 970 EVO 1TB. Seems like Unraid doesn't really register it all correctly. For example: it has two temp sensors, but Unraid seems to monitor the lower temp of the two. I'd expect you'd want to monitor the hotter of the two. (A smartctl check that shows both sensors is sketched after this list.) SMART/NVMe attributes reported by the drive:
     Critical warning: 0x00
     Temperature: 42 Celsius
     Available spare: 100%
     Available spare threshold: 10%
     Percentage used: 0%
     Data units read: 1,122,860 [574 GB]
     Data units written: 1,181 [604 MB]
     Host read commands: 1,741,759
     Host write commands: 8,081
     Controller busy time: 3
     Power cycles: 44
     Power on hours: 4 (4h)
     Unsafe shutdowns: 31
     Media and data integrity errors: 0
     Error information log entries: 0
     Warning comp. temperature time: 0
     Critical comp. temperature time: 0
     Temperature sensor 1: 42 Celsius
     Temperature sensor 2: 51 Celsius
  4. @0xPCP Did reformatting to XFS (assuming you meant that) fix your issue? I'm still hitting the same issue with a single drive formatted as XFS, both as a cache drive and as an unassigned drive mounted outside any arrays/pools/cache drives.
  5. One other thing I tried was updating to the unstable version of Unraid, and I still had issues.
  6. Well, that didn't work. Back to square one. Tried both of the below and the drive still crashes. Anyone got any ideas?
     nvme_core.default_ps_max_latency_us=0
     nvme_core.default_ps_max_latency_us=5500
  7. Is it space-delimited? Just a space and then nvme_core...?
  8. New theory: NVME drives go into a low-power standby mode, and Samsung drives in particular seem to give Linux problems. While in standby you can still do low-I/O transfers, and that aligns with my issue where I can see the drive and even write to it some, but larger I/O transfers give me problems. The recommended fix is to add this to the syslinux.cfg: nvme_core.default_ps_max_latency_us=5500. Being new to Linux I tried and couldn't get it working. Below is my edited syslinux.cfg. What did I do wrong? (The usual single-append format is sketched after this list.)
     default menu.c32
     menu title Lime Technology, Inc.
     prompt 0
     timeout 50
     label Unraid OS
       menu default
       kernel /bzimage
       append initrd=/bzroot
       append nvme_core.default_ps_max_latency_us=5500
     label Unraid OS GUI Mode
       kernel /bzimage
       append initrd=/bzroot,/bzroot-gui
     label Unraid OS Safe Mode (no plugins, no GUI)
       kernel /bzimage
       append initrd=/bzroot unraidsafemode
     label Unraid OS GUI Safe Mode (no plugins)
       kernel /bzimage
       append initrd=/bzroot,/bzroot-gui unraidsafemode
     label Memtest86+
       kernel /memtest
  9. Oh, looks like NVME is controlled by the southbridge and not a separate chip. I definitely turned the IR gun on the southbridge. Don't think that's the issue.
  10. The drives themselves aren't overheating; I'll have to look up where the controller is on the board and give that a test.
  11. Yes, the mobo has 3 M.2 slots directly on the board. Drives are in slots 0 and 1 at the moment. The other HDDs are running off an LSI 9211, since NVME shares a bus with the SATA ports. I haven't smelled any burning, and I even ran an IR gun over the board and drives looking for hot spots and didn't find anything. I think I'd have to be very specific in where I'm reading the temp, so I'm not sure I would've caught the controller overheating. *Edit - The drives crash within 2-5 min of starting a heavy I/O operation as well. I'd expect any overheating to take longer than that.
  12. I flashed the BIOS to the latest version last night as well, no dice. Agreed it could be a mobo/controller issue, but the fact that it only happens under heavy I/O feels more like software. There's also this long thread dating back to 2017 (seriously, why is a bug this old still open?!) about issues with some Samsung NVME drives and Unraid. Seems strangely similar, and the thread goes on to talk about sector sizes on some Samsung NVME drives causing issues. I'm new to Linux so I've got no clue where to start troubleshooting this. Might just give up on Unraid and run Windows.
  13. The strange thing is it happens to either of the two NVME drives. I've tested both drives independently and they both fall offline whenever I initiate a large data transfer or an I/O-heavy operation. I may be holding out hope, but I'm wondering if this could be a driver issue.
  14. Update:
     • I've tried taking both NVME drives out of the cache pool and clearing them - crashed Unraid
     • I've tried taking both NVME drives out of the cache pool and checking them - crashed Unraid
     • I've tried breaking the pool, reformatting a single NVME drive as XFS, and re-adding it as a cache drive - crashed Unraid
  15. I had the same idea and it's been nothing but trouble. Check out this thread for more detail:
  16. +1 on this, I'm hitting the same issue with 2x Samsung 970s in a btrfs cache pool as well. Has anyone found any fixes?
  17. New system build and new to Unraid, so this could be something simple. I've been having a weird issue where enabling the use of cache disks on a share causes the cache drives to error out and force a reboot. The cache drives are 2x Samsung M.2 NVME drives set up in a drive pool for redundancy. Whenever I start a file transfer using Krusader to pull files from a network NAS, it will 100% error out and require a reboot if the share is using the cache drives. I've tested by disabling cache drive usage for the share and get no errors. Diagnostic logs are attached (snapshot taken from a non-error state), and the attached syslog is from the error state. Any help or insight would be appreciated because I've spent hours trying to isolate this issue and it's driving me nuts. (A quick way to pull the relevant errors out of the syslog is sketched after this list.)
     empunraid-diagnostics-20191110-0800.zip
     empunraid-syslog-20191110-0700.zip
  18. @Squid - You got it right. They're all identical make/model disks but one was reading a few bytes lower. Got it fixed, thanks for your help!
  19. I'm setting up Unraid for the first time on mostly new hardware. I decided against buying the larger array and parity drives I had planned on, because I won't need that capacity until I get BlueIris up and running. For the time being I have 2x 1 TB NVME drives I planned to use as mirrored cache drives, plus 4x 640 GB array drives and 1x 640 GB parity drive. The issue I'm having is I can't start the array because the parity drive is 640 GB and the cache drives are 1 TB. My understanding was that the parity drive only covered the drives in the array, not the cache drives, and that the cache drives would offload to the array at some interval. Is my understanding correct, and is there a supported solution?
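
Re post 1 (Windows VM memory): one host-side sanity check, assuming the VM runs under Unraid's libvirt/KVM stack, is to compare the memory the guest has been handed with what the qemu process actually holds. The VM name below is only a placeholder.

     # list VM names as libvirt knows them
     virsh list --all
     # balloon size ('actual') vs host-resident memory ('rss'), reported in KiB
     virsh dommemstat "Windows Server 2019"

If rss sits well below actual, the guest has the memory allocated but isn't really touching it, which would line up with the utilization figure overstating real demand.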
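
Re post 3 (970 EVO temperatures): both NVMe temperature sensors can be read directly with smartctl, independent of whichever value the Unraid dashboard picks. /dev/nvme0 is an assumption; the device node may differ with more than one drive installed.

     # full SMART/health output, including the individual temperature sensor lines
     smartctl -a /dev/nvme0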
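
Re posts 6-8 (syslinux.cfg): in a stock Unraid syslinux.cfg the kernel parameter normally goes on the same append line as the initrd, separated by a single space; as far as I know syslinux only honors one append line per label, so a second append replaces the first rather than adding to it. A sketch of the default boot entry with the parameter folded in (same value as in the posts above):

     label Unraid OS
       menu default
       kernel /bzimage
       append initrd=/bzroot nvme_core.default_ps_max_latency_us=5500

Whether the parameter actually took effect after a reboot can be checked from the Unraid console with:

     cat /proc/cmdline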
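
Re post 17 (cache drives erroring out): when a drive drops offline, the kernel messages are usually the most useful part of the diagnostics. A hedged example of pulling just the NVMe-related lines from the live syslog on the Unraid console, assuming the standard /var/log/syslog path:

     # show the most recent NVMe-related kernel messages
     grep -i nvme /var/log/syslog | tail -n 50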