
Everything posted by stev067

  1. Hoping someone can take a look at my logs and help me determine what happened and what I should do. Last night, one of my disks got taken offline while I was moving a batch of files from it to my cache drive, and I think it was reporting read errors. From what I can tell, the SMART details don't look much different from the other disks', and it says it passes. My array is connected through an HBA card and a SAS expander card, so it could possibly have been a hiccup in either of those too. My array is only about 6 months old, so I'm hoping this disk isn't actually toast. Thanks, I appreciate any help. haven-diagnostics-20211008-0731.zip
  2. I was going to say check out my build in my signature, but I guess signatures are only visible when viewing the forum on desktop. Here's where I'm at currently:
     - Unraid Pro 6.10.0-rc1
     - Array: 10x 14TB Seagate IronWolf Pro (2 parity)
     - Case: Fractal Define 7 XL
     - Motherboard: Asus Prime X570-Pro
     - CPU: Ryzen 9 5950X
     - Memory: Kingston 64GB 3200MHz DDR4 ECC
     - Cache: Samsung 980 Pro 2TB NVMe and 980 Pro 500GB NVMe
     - HBA Card: LSI 9211-8i (IT mode, cooled with a Noctua 40mm fan)
     - SAS Expander: Intel RES2CV240 (cooled with a Noctua 40mm fan)
     - Network Card: Intel X550-T1 10Gbps
     - PSU: EVGA 850 GR
     - CPU Cooler: Corsair H115i RGB Pro XT
     - UPS: APC 650W
     - Hotswap Bay: Icy Dock flexiDOCK MB795SP-B
  3. Did anyone have to make any changes to their secondary / 10Gb NIC? On this update, I'm suddenly unable to interact with SMB shares over my 10Gb NIC, point-to-point from my Windows 10 PC. I can ping back and forth using those IPs, and I can still connect over the br0 1Gb connection, just not my fast one. It will prompt me for credentials and appears to connect, but then Explorer will just sit and time out when I start navigating my shares. I've rolled back for now.
  4. TY! Let me know if you need any standoffs, because I have way more than I could ever use.
  5. I drilled 3 holes in the floor and used brass standoffs to mount it. I'm pretty happy with how it turned out.
  6. Small update on my build. I upgraded one of my cache drives from 500GB to 2TB, still a Samsung 980 Pro NVMe. This is the cache drive I use for moving files between this machine and my main PC. It was big enough for that, but then I started using HandBrake to re-encode a large video collection, with this cache drive as the working space for that.
  7. https://www.amazon.com/gp/product/B07DXRNYNX/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
  8. I'm pulling my hair out over this one. Since updating to the new RC last night, I am no longer able to access my shares from my main PC, which I was previously doing via a secondary 10Gb NIC on eth1. I've tried re-making the user and clearing out all the old shares and their registry entries, and I've rebooted both machines a dozen times. It lets me connect to the share, but when I go to navigate the folders, it just spins and times out. When I connect via the 1Gb connection on eth0, all is fine, but I really need that 10Gb speed between these machines. Any ideas what I can check? I really didn't change anything else aside from the OS update. Thanks.
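     One thing I'm going to try ruling out is Samba only binding to one interface after the update. Below is a sketch of what that looks like in Unraid's SMB Extras (Settings → SMB); this is an untested guess, and the eth0/eth1 names are assumptions for my setup:

     ```conf
     # Untested sketch for the SMB Extras box -- eth0/eth1 are assumptions.
     # Force Samba to listen on both the 1Gb and the 10Gb interfaces.
     [global]
         bind interfaces only = yes
         interfaces = lo eth0 eth1
     ```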
  9. I'm also interested, because my curiosity may take me to rack-mounting things at some point. You may get more/better answers by making a new thread, because I don't know if people will really see your question here.
  10. I went ahead and rebooted the system with the RAID enclosure turned off and unplugged, and now the errors have stopped, which leads me to believe it could be the culprit. I'm going to pull each disk and do a chkdsk on my Windows PC.
  11. Did you ever figure this out? I'm having the same problem.
  12. @Symon I saw you posted the same issue in Jan-2019, but I didn't see a resolution from you. The other user on that thread mentioned not mounting a drive to be passed through, but my passed through drive is not mounted.
  13. Hey guys, today I noticed that my system log is full of the same repeating error, going back as far as the log reaches, which appears to be 3 days. I haven't noticed my system behaving badly in any way, but my syslog is useless if I can't shut this error up, and I've tried researching it without much luck. If anyone can point me in the right direction, I'd really appreciate it.

      I have 2 unassigned drives, both NTFS. One is a USB RAID-50 enclosure with 8 disks; I'm able to mount it and read/write to it, and the error persists even when that drive is powered off and unplugged from the system. The other is an HDD passed through to a Windows 10 VM, where I formatted it within the VM. Those are the only 2 NTFS drives in my system. I wish I knew when this started, because that would help me narrow it down, but I have no idea. Thanks.

      Jul 16 13:50:49 Haven ntfs-3g[7775]: ntfs_mst_post_read_fixup_warn: magic: 0x3a00b303 size: 4096 usa_ofs: 13360 usa_count: 13114: Invalid argument
      Jul 16 13:50:49 Haven ntfs-3g[7775]: Actual VCN (0x153024243000e20) of index buffer is different from expected VCN (0x0) in inode 0x3aaee.

      haven-diagnostics-20210716-1142.zip
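     Since the messages repeat with the same PID and inode, a small script can pull those fields out of the syslog lines, so the offending mount and file can be looked up later (e.g. by matching the PID against running ntfs-3g processes). A minimal sketch using only the two lines quoted above:

     ```python
     import re

     # The two repeating syslog lines quoted above.
     log = """Jul 16 13:50:49 Haven ntfs-3g[7775]: ntfs_mst_post_read_fixup_warn: magic: 0x3a00b303 size: 4096 usa_ofs: 13360 usa_count: 13114: Invalid argument
     Jul 16 13:50:49 Haven ntfs-3g[7775]: Actual VCN (0x153024243000e20) of index buffer is different from expected VCN (0x0) in inode 0x3aaee."""

     # The ntfs-3g PID identifies which mount is complaining; the inode
     # identifies the damaged index buffer on that filesystem.
     pid = re.search(r"ntfs-3g\[(\d+)\]", log).group(1)
     inode_hex = re.search(r"in inode (0x[0-9a-f]+)", log).group(1)
     inode = int(inode_hex, 16)  # decimal form, for tools that want it

     print(pid, inode_hex, inode)  # 7775 0x3aaee 240366
     ```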
  14. Yeah, sounds like you're only a month or two behind me, as I had no idea either. It's dead simple to set up: one port needs to be plugged into your HBA card (doesn't matter which port), and then all the other ports are free for 4x HDDs each. So with one HBA and two expanders, you could connect 40 drives to the same PCIe slot. Here is the link to the expander card I bought from eBay. Another user gave me a tip that this seller will accept offers of $75, which is much lower than his listed price. It has a legit Intel chip on it, though I'm not sure whether the card itself is genuine Intel or some kind of OEM/grey-market unit. I just drilled holes in my case and used brass standoffs to mount it, since I didn't want it rattling around and potentially shorting something out.
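     If it helps, the port math works out like this (assuming the usual specs: the 9211-8i exposes 2 SFF-8087 connectors and the RES2CV240 exposes 6, with 4 lanes per connector):

     ```python
     LANES_PER_CONNECTOR = 4   # each SFF-8087 connector carries 4 SAS/SATA lanes
     HBA_CONNECTORS = 2        # LSI 9211-8i
     EXPANDER_CONNECTORS = 6   # Intel RES2CV240 (24 lanes total)

     # On each expander, one connector goes back to the HBA;
     # the remaining connectors fan out to drives.
     drives_per_expander = (EXPANDER_CONNECTORS - 1) * LANES_PER_CONNECTOR  # 20

     # The HBA has two connectors, so it can feed two expanders.
     total_drives = HBA_CONNECTORS * drives_per_expander
     print(total_drives)  # 40
     ```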
  15. Hey! So I have one of those IR thermal temp guns where you just point it. This one. I'm not sure if there is an onboard sensor on the LSI cards, but I know the Dynamix System Temperature plugin didn't see one. Regarding the SATA ports, I might not have really considered them, haha. For me it was partly an exercise in learning about HBAs and expanders, and partly wanting the IO to expand into another enclosure in the future, if necessary. Also, I didn't really like the idea of having HDDs in the same array on mixed interfaces; it's cleanest to keep everything on the same single PCIe slot.
  16. Hey yeah I didn't get a pic of the dock, but it's in the front top slot. This blocks the top 2 HDD tray slots, which I don't need yet. I haven't been able to get the dock to work with hot plugging. A drive is only recognized if it was detected on boot, which kind of makes it pointless to me. Maybe it can be fixed but I haven't needed it yet. Probably in the long term, I will end up taking this out, so I can add 2 more drives to the top trays.
  17. Just wanted to report back. I tried playing with firewall rules, and got frustrated with it. I plugged in an old laptop to one of the VLAN ports, and verified that even when I set the IP to the same subnet as the rest of my network, it does not see anything else there, so it is truly limited to what's plugged into the VLAN. That was really my main concern, and I decided it wasn't worth the time I was spending to try and limit port IO. If someone were to plug into the ethernet cable powering one of my cameras, all they would maybe be able to attack is that VM and nothing else.
  18. Just to follow up on my issue: I tried changing the "niceness" to give this container the highest priority, and that didn't make a difference. I ended up dedicating fewer cores, and now those remaining cores are going into the 90%+ range. So I guess I have more CPU than HandBrake knows what to do with?
  19. Huh, thanks. Is there anyone with a Ryzen CPU who can confirm I should be getting higher CPU utilization than 60-80% per core? Edit: It doesn't seem to have anything to do with temps or my cooler. When I run Cinebench, it pegs all my cores at 100% and temps hold around 84°C. It shouldn't have anything to do with storage speeds either, since HandBrake is reading from and writing to an NVMe drive.
  20. But when you run the HandBrake Docker container, what do the core utilizations look like?
  21. Are you talking 80-90% CPU usage, or 80-90°C temps? I'm more wondering if it's normal that the cores aren't maxing out their potential. My cooler is a Corsair H115i, which people say is good for this CPU.
  22. Hey guys, firstly, thank you for creating and maintaining this Docker image. I upgraded my CPU to a Ryzen 9 5950X to speed up a tens-of-terabytes size-reduction project using HandBrake. I'm pinning 14 of the 16 cores to this container, leaving the remaining 2 for my Blue Iris VM. What I'm finding is that the cores all hover around 60-85%, rarely go into the 90%+ range, and really never peg 100%. I have an AIO on my CPU, and I'm seeing the temp usually around 81°C. I'm really just wondering if what I'm seeing is normal. Should I be looking into unlocking the unused processing power, or is this typical behavior? Thanks.
  23. Alright thanks for all your help. I will research windows firewall rules.
  24. OK, I used Wireshark, and it looks like the only communication on br0.101 is between port 1935 of the camera IP and another single port of the VM IP (both directions). Can I create one rule that allows communication between these 2 ports while blocking everything else on this adapter? Do I have to delete or change any of the existing rules? Maybe you have an example of what this looks like in the firewall rules.
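     For reference, my understanding is that explicit block rules override allow rules in Windows Firewall, so an allow-plus-block pair on the same subnet won't work; the usual approach is to make the default policy deny and add one scoped allow. An untested netsh sketch, where the camera IP is a placeholder and the VM is assumed to initiate the pull from the camera's port 1935 (note the default-deny applies to every adapter, so normal LAN traffic would need its own allow rules too; PowerShell's New-NetFirewallRule can scope a rule to a single adapter via -InterfaceAlias, which netsh can't):

     ```bat
     :: Untested sketch -- the camera IP is a placeholder.
     :: Make both directions default-deny, then allow only the camera pull.
     netsh advfirewall set allprofiles firewallpolicy blockinbound,blockoutbound
     netsh advfirewall firewall add rule name="BI camera 1935" dir=out action=allow protocol=TCP remoteip=192.168.101.20 remoteport=1935
     ```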