
Posts posted by stev067

  1. Hoping someone can take a look at my logs and help me determine what happened and what I should do. Last night, one of my disks was taken offline while I was moving a batch of files from it to my cache drive, and I believe it was reporting read errors. From what I can tell, the SMART details don't look much different from the other disks', and the drive reports that it passes. My array is connected through an HBA card and a SAS expander card, so it could also have been a hiccup in either of those. My array is only about 6 months old, so I'm hoping this disk isn't actually toast.

    Thanks, I appreciate any help.

    haven-diagnostics-20211008-0731.zip

  2. 3 hours ago, Naed said:

    @stev067

     

    Love the look of your build. I've been running an HP N36L build for [checks notes] 10 years now and am very keen to upgrade.

     

    Unfortunately I'm pretty time-poor when it comes to research - any chance you'd be willing to post your build list?

     

    Cheers

     

    I was going to say check out my build in my signature, but I guess signatures are only visible when viewing the forum on desktop. Here's where I'm at currently:

     

    Unraid Pro 6.10.0-rc1

    Array: 10x 14TB Seagate Ironwolf Pro (2 parity)

    Case: Fractal Define 7 XL

    Motherboard: Asus Prime X570 Pro

    CPU: Ryzen 9 5950X

    Memory: Kingston 64GB 3200MHz DDR4 ECC

    Cache: Samsung 980 Pro 2TB NVMe and 980 Pro 500GB NVMe

    HBA Card: LSI 9211-8i (IT mode) (cooled with Noctua 40mm)

    SAS Expander: Intel RES2CV240 (cooled with Noctua 40mm)

    Network Card: Intel X550-T1 10Gbps

    PSU: EVGA 850 GR

    CPU Cooler: Corsair H115i RGB Pro XT

    UPS: APC 650W

    Hotswap Bay: Icy Dock flexiDOCK MB795SP-B

  3. Small update on my build. I upgraded one of my cache drives from 500GB to 2TB, still a Samsung 980 Pro NVMe. This is the cache drive I use for moving files between this machine and my main PC. It was big enough for that, but then I started using Handbrake to re-encode a large video collection, with this cache drive as the working space for that job.

  4. I'm pulling my hair out over this one. Since updating to the new RC last night, I am no longer able to access my shares from my main PC, which I was previously doing via a secondary 10Gb NIC on eth1.

    I've tried re-making the user and clearing out all the old shares and their registry entries, and I've rebooted both machines a dozen times.
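
    For completeness, the Windows-side cleanup I mean was along these lines (a sketch from memory; the stored-credential entry name varies per machine, so check cmdkey /list first):

    rem Drop all mapped drives and cached SMB sessions
    net use * /delete
    rem List stored credentials, then remove the one for the server (Haven here)
    cmdkey /list
    cmdkey /delete:Haven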

    It lets me connect to the share, but when I go to navigate the folders, it just spins and times out.

    When I connect via the 1Gb connection on eth0, all is fine. But I really need that 10Gb speed between these machines.

    Any ideas what I can check? I really didn't change anything else aside from the OS update.

    Thanks.

  5. 1 hour ago, JBa said:

    This is a good build. Does anyone have a recommendation for a rack-mounted case for this?

    I'm also interested, because my curiosity may lead me to rack-mounting things at some point. You may get more (and better) answers by starting a new thread, since I don't know how many people will see your question here.

  6. Hey guys,

    Today I noticed that my system log is full of the same repeating error(s), going back as far as the log reaches, which appears to be 3 days. I have not noticed my system behaving badly in any way, but my syslog is useless if I can't shut this error up.

    I have tried researching this error without much luck. If anyone can point me in the right direction, I'd really appreciate it.

    I have 2 unassigned drives, both NTFS. One is a USB RAID-50 enclosure with 8 disks; I'm able to mount it and read/write to it, and this error persists even when that drive is powered off and unplugged from the system. My other drive is an HDD passed through to a Windows 10 VM, where I formatted it from within the VM. Those are the only 2 NTFS drives in my system. I wish I knew when this started, because that would help me narrow it down, but I have no idea.

    Thanks

     

    Jul 16 13:50:49 Haven ntfs-3g[7775]: ntfs_mst_post_read_fixup_warn: magic: 0x3a00b303 size: 4096 usa_ofs: 13360 usa_count: 13114: Invalid argument
    Jul 16 13:50:49 Haven ntfs-3g[7775]: Actual VCN (0x153024243000e20) of index buffer is different from expected VCN (0x0) in inode 0x3aaee.
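
    While digging, these were handy for confirming which device that ntfs-3g process is actually serving (a sketch; the PID 7775 comes from the log lines above):

    # Show the full ntfs-3g invocation, which includes the device and mountpoint
    ps -p 7775 -o args=
    # ntfs-3g mounts appear as fuseblk here
    grep fuseblk /proc/mounts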

     

    haven-diagnostics-20210716-1142.zip

  7. Just now, deanpelton said:

    Fascinating, I hadn't thought about that!

     

    I run an mATX case, so I only have the 1 PCIe slot, and I am maxed out between the 8 ports from the LSI and the 4 on my motherboard. Looks like you just gave me a way out.

    Had no idea about these expander cards.

    Yeah, sounds like you're only a month or two behind me, as I had no idea either. It's dead simple to set up. One port needs to be plugged into your HBA card (doesn't matter which port), and then all the other ports are free for 4x HDDs each. So with one HBA and two expanders, you could connect 40 drives to the same PCIe slot; the port math is below.
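
    The arithmetic, with port counts from the spec sheets:

    LSI 9211-8i HBA:     2x SFF-8087 ports
    RES2CV240 expander:  6x SFF-8087 ports (1 uplink + 5 free)
    Per expander:        5 free ports x 4 drives = 20 drives
    Two expanders:       2 x 20 = 40 drives on one PCIe slot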

    Here is the link to the expander card I bought from eBay. Another user gave me a tip that this seller will accept offers of $75, which is much lower than his listed price. It has a legit Intel chip on it, though I'm not sure whether the card itself is genuine Intel or some kind of OEM / grey-market board.

    I just drilled holes in my case and used brass standoffs to mount it, since I didn't want it rattling around and potentially shorting out.

  8. 7 minutes ago, deanpelton said:

    Great looking setup!

    One question: I have a similar LSI card which I smacked a 40mm Noctua on like you did, and I wonder how you knew the temp of the card? You mentioned it decreased the temp a lot.

    I'd love to see if I can measure how well mine is working.

     

    Also wondering why you went with an Expander card rather than just use the ports on your mobo for the rest?

    Hey!

    So I have one of those IR thermal temp guns where you just point it. This one. I'm not sure if there is an onboard sensor on the LSI cards, but I know that the Dynamix System Temperature plugin didn't see one.

    Regarding the SATA ports, I might not have really considered it, haha. For me, it was partly an exercise in learning about HBAs and expanders, and I wanted the IO to expand to another enclosure in the future if necessary. Also, I didn't really like the idea of having HDDs in the same array on mixed interfaces; it's cleanest to keep everything on the same single PCIe slot.

  9. 1 hour ago, orlando500 said:

    Hi, where did you use the Icy Dock flexiDOCK MB795SP-B? I have a new Define 7 XL that I'm going to start building a new server in, but I was thinking of going with only standard drive trays. I didn't see the Icy Dock in the picture, I think 🙂

     

     

    Hey, yeah, I didn't get a pic of the dock, but it's in the front top slot. It blocks the top 2 HDD tray slots, which I don't need yet. I haven't been able to get the dock to work with hot-plugging; a drive is only recognized if it was detected at boot, which kind of makes it pointless to me. Maybe that can be fixed, but I haven't needed it yet. In the long term, I will probably end up taking it out so I can add 2 more drives to the top trays.

  10. Just wanted to report back. I tried playing with firewall rules and got frustrated with it. I plugged an old laptop into one of the VLAN ports and verified that even when I set its IP to the same subnet as the rest of my network, it does not see anything else there, so it truly is limited to what's plugged into the VLAN. That was really my main concern, and I decided it wasn't worth the time I was spending trying to limit port IO. If someone were to plug into the Ethernet cable powering one of my cameras, all they could potentially attack is that VM and nothing else.

  11. Just to follow up on my issue: I tried changing the "niceness" to give this docker the highest priority, and that didn't make a difference. I ended up dedicating fewer cores, and now those remaining cores are going into the 90%+ range. So I guess I have more CPU than Handbrake knows what to do with?

  12. 16 hours ago, C_James said:

    Running Handbrake now, so here's a screenshot.

    image.png

    Huh, thanks. Is there anyone with a Ryzen CPU who can confirm whether I should be getting higher CPU utilization than 60-80% per core?

    Edit: It doesn't seem to have anything to do with temps or my cooler. When I run Cinebench, it pegs all my cores at 100% and temps hold around 84C. It shouldn't have anything to do with storage speeds either, since Handbrake is reading from and writing to an NVMe drive.

  13. 1 minute ago, C_James said:

    I'm saying usage, my bad; I was typing the reply while in and out of a Siege game. And okay, 81C does seem a little high for not all of the cores being loaded. I would check case temps or reapply the thermal paste. 90C is where Ryzen thermal throttling kicks in.

    So when you run the Handbrake docker, what do the core utilizations look like?

  14. 2 minutes ago, C_James said:

    It's normal. I have a 10-core Xeon with all 10 cores being used on Handbrake. Same high 80 to 90% temps. You are close to the thermal limit for the chip. I hope the cooler is rated for 105W-or-more CPUs.

     

    Are you talking 80-90% CPU usage, or 80-90C temps? I'm more wondering if it's normal that the cores aren't maxing out their potential. My cooler is a Corsair H115i; people say it's good for this CPU.

  15. Hey guys,

    Firstly, thank you for creating and maintaining this docker. I upgraded my CPU to a Ryzen 5950X to speed up a tens-of-terabytes size-reduction project using Handbrake. I'm pinning 14 of the 16 cores to this container, to leave the remaining 2 for my Blue Iris VM. What I'm finding is that the cores all hover around 60-85%, rarely go into the 90%+ range, and really never peg 100%. I have an AIO on my CPU, and I am seeing the temp usually around 81C.

    I'm really just wondering if what I'm seeing is normal. Should I be looking into unlocking the unused processing power, or is this typical behavior?
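
    For reference, I do the pinning through the Unraid template, but it amounts to roughly this on the command line (a sketch; the image name is a placeholder, and the core numbering assumes SMT siblings are adjacent, which lscpu -e can confirm):

    # Threads 0-27 = 14 physical cores for Handbrake; 28-31 left for the VM
    docker run --cpuset-cpus="0-27" <handbrake-image>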

    Thanks.

  16. 1 minute ago, Vr2Io said:

    As I said before, I seldom use a software firewall, so you will need to experiment with how to block everything and allow only the one port through on br0.101. This won't lock you out if anything goes wrong, since management is over br0. I think you should disable all the default rules; there's no need to delete them.

    Alright, thanks for all your help. I will research Windows firewall rules.

  17. 8 hours ago, Vr2Io said:

    Yes

     

    In fact, there is no specific recommendation I can give; it's best to set it as tight as possible, i.e. only allow specific TCP/UDP ports through to br0.101 (so the VM has the greatest protection from the cameras' ports).

    You can get that info from the cameras' communication-protocol documentation, or by installing Wireshark to capture br0.101 traffic.

     

    I use UniFi cameras, so I refer to the info below.

    https://help.ui.com/hc/en-us/articles/217875218-UniFi-Video-Ports-Used

     

    Ok, I used Wireshark, and it looks like the only communication on br0.101 is between port 1935 on the camera IP and a single port on the VM IP (both directions). Can I create one rule that allows communication between these two ports while blocking everything else on this adapter? Do I have to delete or change any of the existing rules? Maybe you have an example of what this looks like in the firewall rules.
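
    To make the question concrete, here's the kind of rule I'm picturing, in PowerShell on the VM (a sketch; the interface alias is a placeholder, and Get-NetAdapter will show the real name of the br0.101 virtio NIC):

    # Windows Firewall evaluates block rules ahead of allow rules, so
    # "block all + allow 1935" won't work; instead, block every remote
    # port except 1935 on the camera-facing adapter.
    New-NetFirewallRule -DisplayName "Cameras: RTMP only" `
        -Direction Outbound -Action Block -InterfaceAlias "Ethernet 2" `
        -Protocol TCP -RemotePort @("1-1934", "1936-65535")
    # Unsolicited inbound from the cameras is already dropped by the default
    # profile policy; replies to the established stream pass statefully.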

  18. 6 hours ago, Vr2Io said:

    I would suggest keeping your previous setting: cameras in VLAN br0.101, then add both br0 and br0.101 to the Blue Iris VM.

     

    - br0 for VM management, file sharing, etc.

    - br0.101 for the cameras, in a different IP subnet.

    - Apply firewall rules on br0.101, or on both.

     

    This is still better than having all the stuff on the same flat network.

    Oh, I didn't realize I could add a second virtio adapter to the same VM. This sounds like the way to go. When you say apply firewall rules, do you mean within the Windows VM? What kind of rules would you suggest?

     

    Edit: Ok, I got that setup working, and I'm really pleased with it. It's basically what I was trying to set up. Thanks to both of you for your help. I'm still curious what you would recommend for firewall settings, because I don't know what an attacker would be able to do, given this setup.

  19. 10 hours ago, ken-ji said:

    Ok, we need to make some stuff clear first.
    @stev067, you just replaced the switch connecting your Unraid server with the VLAN-enabled switch (PoE isn't important here),

    so I assume your network looks like

    image.png

     

    And I think your config looks good, save for using VLAN 102 as the general-purpose VLAN (stick to VLAN 1 unless your router and other gear need this level of control),
    so your goal is to limit access between the cameras and the Blue Iris PC versus Unraid and the rest of the network.

     

    Judging from your setup, you already cannot reach the cameras or the VM from the rest of the network,

    and if the BR0.101 interface does not have an IP address, Unraid will also be unable to reach the VM.

     

    You seem to want to access the VM from the rest of the network as well, and the easiest way (not the most secure) is to simply add another NIC to the VM and connect it to br0.

     

    Alternatively, if you have a router with VLAN support, you can simply put the cameras in their own VLAN, then program the router to deny access from the cameras to your network while allowing the VM to connect to the cameras.

     

     

    Thanks for taking a look. This networking stuff is all wizardry to me. As for the image: there is another switch between the router and this new switch, but I don't have it set up to do anything special, so the diagram is accurate apart from that.

    This whole VLAN thing has been very frustrating to set up, and I've just disabled it for now. I'm having a hard time understanding the whole tagged vs. untagged vs. non-member thing, and the PVID settings. I'd rather not replace my router just for this, or add a second NIC. Is there any way I can manage the traffic from within the VM itself? Otherwise, is there some medium level of safety I can achieve with what I have, while still being able to network with Blue Iris?
