
dis3as3d

Members
  • Content Count

    21
  • Joined

  • Last visited

Community Reputation

0 Neutral

About dis3as3d

  • Rank
    Member


  1. I have a Windows VM that's showing 80% memory utilization of its 16 GB of memory. Without thinking, I bumped it up to 24 GB. After the change, it was still at 80% utilization, and that got me wondering what was happening. Looking into it, the VM is using WELL BELOW what it's reporting. The OS is Windows Server 2019. Is anyone else experiencing similar issues? What should I investigate to help get memory usage under control?
  2. I had a VM up and running. After a reboot, the VM didn't come back online. I VNC'ed into the VM to see why it wasn't coming back and it looks like it can't find the OS. How do I get my old VM back?
  3. Seems like plenty of folks have Samsung drives working OK. I'm wondering if it's something specific to the 970 EVO 1TB. Seems like Unraid doesn't really register it all correctly. For example: it has two temp sensors, but Unraid seems to monitor the lower temp of the two. I'd expect you'd want to monitor the hotter of the two.

     Critical warning: 0x00
     Temperature: 42 Celsius
     Available spare: 100%
     Available spare threshold: 10%
     Percentage used: 0%
     Data units read: 1,122,860 [574 GB]
     Data units written: 1,181 [604 MB]
     Host read commands: 1,741,759
     Host write commands: 8,081
     Controller busy time: 3
     Power cycles: 44
     Power on hours: 4 (4h)
     Unsafe shutdowns: 31
     Media and data integrity errors: 0
     Error information log entries: 0
     Warning comp. temperature time: 0
     Critical comp. temperature time: 0
     Temperature sensor 1: 42 Celsius
     Temperature sensor 2: 51 Celsius
  4. @0xPCP Did reformatting into XFS (assuming that's what you meant) fix your issue? I'm still hitting the same issue with a single drive formatted as XFS, both as a cache drive and as an unassigned drive mounted outside any arrays/pools/cache.
  5. One other thing I tried was updating to the unstable version of Unraid, and I still had issues.
  6. Well, that didn't work. Back to square one. I tried both of the below and the drive still crashes. Anyone got any ideas?

     nvme_core.default_ps_max_latency_us=0
     nvme_core.default_ps_max_latency_us=5500
  7. Is it space-delimited? Just a space and then nvme_core...?
  8. New theory: NVMe drives go into a lower-power standby mode, and Samsung drives in particular seem to give Linux problems. While in standby you can still do low-I/O transfers, and that aligns with my issue, where I can see the drive and even write to it some, but larger I/O transfers give me problems. The recommended fix is to add this to the syslinux.cfg:

     nvme_core.default_ps_max_latency_us=5500

     Being new to Linux, I tried and couldn't get it working. Below is my edited syslinux.cfg. What did I do wrong?

     default menu.c32
     menu title Lime Technology, Inc.
     prompt 0
     timeout 50
     label Unraid OS
       menu default
       kernel /bzimage
       append initrd=/bzroot
       append nvme_core.default_ps_max_latency_us=5500
     label Unraid OS GUI Mode
       kernel /bzimage
       append initrd=/bzroot,/bzroot-gui
     label Unraid OS Safe Mode (no plugins, no GUI)
       kernel /bzimage
       append initrd=/bzroot unraidsafemode
     label Unraid OS GUI Safe Mode (no plugins)
       kernel /bzimage
       append initrd=/bzroot,/bzroot-gui unraidsafemode
     label Memtest86+
       kernel /memtest
  9. Oh, looks like NVMe is controlled by the southbridge and not a separate chip. I definitely turned the IR gun on the southbridge. Don't think that's the issue.
  10. The drives themselves aren't overheating. I'll have to look up where the controller is on the board and give that a test.
  11. Yes, the mobo has 3 M.2 slots directly on the board. The drives are in slots 0 and 1 at the moment. The other HDDs are running off an LSI 9211, since NVMe shares a bus with the SATA ports. I haven't smelled any burning, and I even ran an IR gun over the board and drives looking for hot spots and didn't find anything. I think I'd have to be very specific in where I'm reading the temp, so I'm not sure I would've caught the controller overheating. *Edit - The drives crash within 2-5 minutes of starting a heavy I/O operation as well. I'd expect any overheating to take longer than that.
  12. I flashed the BIOS to the latest version last night as well, no dice. Agreed it could be a mobo/controller issue, but the fact that it only happens under heavy I/O feels more like software. There's also this long thread dating back to 2017 (seriously, why is a bug this old still open?!) about issues with some Samsung NVMe drives and Unraid. It seems strangely similar, and the thread goes on to talk about sector sizes on some Samsung NVMe drives causing issues. I'm new to Linux, so I've got no clue where to start troubleshooting this. Might just give up on Unraid and run Windows.
  13. The strange thing is it happens to either of the two NVMe drives independently. I've tested both drives independently, and they both fall offline whenever I initiate a large data transfer or I/O-heavy operation. I may be holding out hope, but I'm wondering if this could be a driver issue.
  14. Update:
     - Took both NVMe drives out of the cache pool and cleared them: crashed Unraid
     - Took both NVMe drives out of the cache pool and checked them: crashed Unraid
     - Broke the pool, reformatted a single NVMe drive as XFS, and re-added it as a cache drive: crashed Unraid
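On the two temperature sensors mentioned in post 3: smartctl can print the NVMe health log directly, which lists each sensor separately and may help confirm which value Unraid is picking up. A minimal sketch (the device path /dev/nvme0 is an assumption; adjust for your system):

```
# Print the full NVMe SMART/health data; the output includes
# "Temperature Sensor 1" and "Temperature Sensor 2" as separate lines
# (/dev/nvme0 is an assumed device path)
smartctl -a /dev/nvme0
```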
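Regarding the syslinux.cfg in post 8: in syslinux, each label honors a single append line, and extra kernel parameters are space-separated on that same line, so a second append line overrides the first rather than adding to it. A sketch of the relevant stanza under that assumption:

```
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot nvme_core.default_ps_max_latency_us=5500
```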