O_M_R

Members
  • Posts: 8

O_M_R's Achievements
  • Rank: Noob (1/14)
  • Reputation: 0

  1. First of all, thank you. I've been using the EAP controller and haven't really had any issues with it. Is there a plan to move to version 4? It looks like a big migration. Thanks again for all of your contributions.
  2. Just wanted to drop you a quick thank you. I'll turn off the GPU Statistics plugin for now. I was wondering if that was the cause; I just haven't had time to look at it yet.
  3. Since updating, I see this over and over until it fills my log:
       Mar 13 16:28:03 NAS kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
       Mar 13 16:28:06 NAS kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000d0000-0x000dffff window]
     Anyone else seeing this? I'm just using an old 750 Ti.
  4. Went down this rabbit hole a bit; not sure if this will help you. On mine, I have a 10Gb link as my "main" link on eth0, and eth3 is a 1Gb NIC. I attached the 1Gb NIC to a spare port on my router, made a new interface, and allowed the two subnets to talk to one another; I plan on separating where each subnet's WAN traffic goes. A by-product is that I can use either br0 or br3 for dockers, so their traffic goes out either NIC (a minimal docker example is sketched after this post list). Either that, or just bond all your available bandwidth and call it a day.
  5. I love being able to add one drive at a time, and dockers! Integrated GPU support would be nice.
  6. It's just strange that with an older 120GB OCZ I never had a single problem. There are really only three things I can think of at this point, aside from building a new server:
       1. The firmware on the Samsungs (or the drive controller) hates old hardware and is causing the issue through some form of incompatibility.
       2. There actually isn't any corruption at all, but the Samsung drive is doing something behind the scenes that makes btrfs angry (which may be down to the older hardware).
       3. NZBGet is allocating space in a manner that btrfs doesn't like.
     I find it odd that the only thing corrupting is NZBGet downloads; it's strange to say the least. I know the 850's firmware is up to date, but I'm not sure about the 860, since I just ordered it and put it in (a quick smartctl check is sketched after this post list). Updating the firmware is the only thing I've got left to try. I've even tried changing SATA cables! To be honest, more than anything it just stinks to try to improve the server a bit and run into a headache like this (to be clear, I'm not blaming unRAID at all!). Usenet downloads are one thing, but I have some more important things on there I'd prefer not to have corrupted.
  7. Done! Hopefully I've tossed up the right stuff. I also wanted to say thanks; I've been reading the forum a lot, as there's lots of great info here for troubleshooting.
  8. Hi all, I'm at a complete loss for this one, and it centers around these kinds of errors:
       Dec 9 03:44:30 NAS kernel: BTRFS warning (device sdk1): csum failed root 5 ino 453231 off 22727274496 csum 0x70dd045b expected csum 0xe0efe733 mirror 1
     I'm hoping someone can help me out, since I've exhausted literally everything I can think of. I used to have an old 120GB SSD as my cache; I got zero errors and everything was happy, all using btrfs. I am using older hardware: an Asus Sabertooth X58 with an i7 950 and 24GB of RAM, and I'm running the latest (stable) release of unRAID.
     I got a 1TB Samsung EVO 850 and replaced my cache drive with it. The swap went fine, but then I started noticing these errors in the log. All of them were files being downloaded by NZBGet, and I only noticed because I didn't understand why there was still data sitting on the cache drive that should have been moved over. To be clear, I've *never* seen this error for my various docker containers etc., just NZBGet-created files (so far).
     So, next course of action: I grabbed a 1TB Samsung EVO 860 and put the pair in a btrfs cache pool, thinking perhaps the first SSD was faulty. I continued to get errors... in the exact same spot on every file, on both disks. Weird. I then tried testing my RAM, since it's older Corsair and not ECC or anything. I let memtest run for two passes (around 13 hours) before I had to get things up and running again. No errors.
     Next, my motherboard has two controllers: two SATA3 ports on a Marvell controller, which I read could be problematic, and some older Intel SATA2 ports. I tried switching to the SATA2 ports, and the error persisted. Finally, I disconnected the EVO 850 and ran just the EVO 860, reformatted it, and restored my data. Still more errors.
     I believe it's happening mostly on files that NZBGet has repaired, as if the checksum metadata isn't being updated after the repair, but this is a random guess. I've tried everything I can think of short of building a new server, which is on the radar but just not right now. I'd really like to keep both drives together as a cache pool, as I appreciate the redundancy, but I'm getting close to throwing in the towel on this one. I'm trying to figure out if there's some obscure setting I need to change in NZBGet and all will be right in the world (a btrfs scrub sketch follows this post list).
     EDIT: Diagnostics added: nas-diagnostics-20191209-2151.zip
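
A minimal sketch for post 4 above, assuming Unraid has created custom docker networks named br0 and br3 for the two bridges; the container name, image, and IP address below are placeholders, not values from the post. Pinning a container to br3 sends its traffic out the 1Gb NIC instead of the 10Gb link.

  # Confirm the bridge-backed docker networks exist (names assumed from the post)
  docker network ls

  # Attach a container to the secondary bridge so it uses the 1Gb NIC;
  # the name, image, and address are illustrative only
  docker run -d --name example --network br3 --ip 192.168.3.50 linuxserver/nzbget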
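
For the firmware question in post 6, one quick way to confirm which firmware each Samsung SSD is actually running is smartctl from smartmontools; the device paths below are assumptions, so substitute the real cache devices.

  # Print drive identity, including the "Firmware Version" line (device paths assumed)
  smartctl -i /dev/sdj
  smartctl -i /dev/sdk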
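
For the csum failures in post 8, a small diagnostic sketch, assuming the pool is mounted at the usual Unraid location /mnt/cache: device stats show how many checksum errors btrfs has attributed to each drive, and a scrub re-reads and verifies every block so any further errors surface immediately.

  # Per-device error counters (write/read/flush/corruption/generation) since last reset
  btrfs device stats /mnt/cache

  # Re-read and verify all data and metadata in the pool, then check the result
  btrfs scrub start /mnt/cache
  btrfs scrub status /mnt/cache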