Everything posted by manofoz

  1. I made the switch yesterday! I was quite nervous because my server was very stable and gets a lot of use, but thankfully it was smooth sailing. All I needed was one LSI Broadcom SAS 9300-8i and a low-profile CPU cooler. My temps are great: the CPU is idling at 35°C right now and no disk is any higher. At full speed the fans that come with this beast are turbines, but I had been using the "Fan Auto Control" plugin, and after configuring it for the new fans it quieted down quite a bit. My server is in utility space so some noise doesn't hurt anyone, but if it were in living space new fans would be needed. I also happened to have enough four-pin fan cable splitters & extenders on hand to get them all plugged in (7 chassis fans + the CPU fan was more than my motherboard could handle). I was able to build out my temporary rack a bit more and it doesn't seem overly strained. I'm not using the rails yet as the rack isn't sturdy enough for those, but I have them ready for when I move. For now it's sitting on a diesel shelf. I'll need some blank panels to hide the wires, but it took me all day until ~2AM yesterday to get this far...
  2. The DIY JBODs I've read about seem to use a cheap motherboard just to power a SAS expander, so my guess would be powering on via a switch wired to that motherboard. I was thinking about going that route instead of getting one of these, but I went with the 36-bay chassis. Since unRAID's array doesn't go over 30 drives, I'm not sure I'd go the route of attaching more disks vs. building a standalone NAS if I needed more than that. I'm using 20TB drives, so 28 data drives (20TB x 28 = 560TB of raw capacity) wouldn't be bad, assuming the 2 parity drives count against the 30.
  3. @murkus I'm hesitant to try this - the server is heavily used and I'm not sure how the "IPv4 custom network on interface bond0" setting gets changed to eth0. I totally screwed things up the last time I fiddled with the network settings (added an SFP+ NIC) and ended up with a monitor and keyboard & mouse on the floor sorting it out lol.
  4. Thanks! I'll try out those settings you recommended. I am currently using:
     Docker Settings:
       Docker custom network type: macvlan
       Host access to custom networks: Disabled
       IPv4 custom network on interface bond0:
         Subnet: 192.168.0.0/24
         Gateway: 192.168.0.1
         DHCP pool: not set
     Network Settings:
       Enable bridging: No
     Not sure if my "IPv4 custom network" setting will change to eth0 at some point, but it's bond0 now. Other than that, it seems I just need to enable bridging and switch to ipvlan. However, I'm still unclear what the duplicate IP is actually hosting, since I can access all of my Docker containers and VMs without issue. It also pops up and then goes away after about 4 minutes, which is weird.
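     For context, my rough understanding is that Unraid builds this custom network for me from the GUI settings above. As a sketch of what I picture the two options mapping to at the Docker CLI level (subnet, gateway, and interface copied from my settings; the network names are just made up for the example):

       # current setup (macvlan): every container gets its own MAC address on the LAN
       docker network create -d macvlan \
         --subnet=192.168.0.0/24 --gateway=192.168.0.1 \
         -o parent=bond0 example_macvlan

       # suggested setup (ipvlan): containers share the parent interface's MAC,
       # which should stop the router from seeing multiple MACs answering for one IP
       docker network create -d ipvlan \
         --subnet=192.168.0.0/24 --gateway=192.168.0.1 \
         -o parent=bond0 example_ipvlan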
  5. I suspect this happened with my old router too and it just didn't care. Ubiquiti is having a fit about it, which makes it very visible, but I have not noticed any ill effects. All my VMs have their own IPs and can be accessed. All Docker containers can be accessed from the server's IP with the port configured in the template. No idea why vhost0 is presenting itself as a different network adapter with the same IP.
  6. Hello, I just switched to a Dream Machine SE and I am getting errors about a duplicate IP. It's two MAC addresses, both owned by the unRAID server. The stuff I've found online mostly points to disabling "Host access to custom networks" for Docker, which was already disabled. With ifconfig I see bond0 and eth0 assigned to the MAC address of the server's network card:

       bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
              inet 192.168.0.200  netmask 255.255.255.0  broadcast 0.0.0.0
              ether e4:1d:2d:24:39:c0  txqueuelen 1000  (Ethernet)
              RX packets 1248780799  bytes 1808446219605 (1.6 TiB)
              RX errors 0  dropped 22179  overruns 454  frame 0
              TX packets 205059076  bytes 143180099899 (133.3 GiB)
              TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

       eth0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
              ether e4:1d:2d:24:39:c0  txqueuelen 1000  (Ethernet)
              RX packets 1248780799  bytes 1808446219605 (1.6 TiB)
              RX errors 0  dropped 454  overruns 454  frame 0
              TX packets 205059076  bytes 143180099899 (133.3 GiB)
              TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

     However, for the mystery MAC address I see vhost0:

       vhost0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
              ether 02:fb:51:73:ed:04  txqueuelen 500  (Ethernet)
              RX packets 8869581  bytes 29481304941 (27.4 GiB)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 3974  bytes 166908 (162.9 KiB)
              TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

     I do have a Windows 11 VM and a Home Assistant VM running, but both have their own IPs. What is weird is that this duplicate IP with the same MAC does not appear for long. It goes away for a while and then comes back seemingly randomly.
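     In case it's useful, one way to watch which interfaces and MACs are answering for the address the next time the alert fires (assuming tcpdump is available on the box; the IP and interface are from the output above):

       # show every interface and any addresses bound to it, in brief form
       ip -br addr show

       # watch ARP traffic for the server's address to see which MACs reply for it
       tcpdump -e -n -i bond0 arp and host 192.168.0.200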
  7. It may be running up against the limit - it was dirt cheap and the eBay listing says 500 lbs "max load bearing". Not sure what that becomes once it's on wheels. I'm not really sure how to weigh the servers, but I'd say the current build is around 100 lbs. I didn't think the 4U would add too much more weight until I added more drives, and I'd be putting it at the bottom. Other than that I was going to put a Dream Machine Special Edition and an NVR Pro on it. For the new house I was going to get something like this: I think I can have it delivered to the garage and then have the movers put it at the termination point of our ethernet cable drops. After assembling the little one I don't really want to assemble a 300 lb one...
  8. Thanks! This was great information; I think I have a plan for what exactly to do. I'll be moving in 9+ months when construction is done and plan to have a 42U rack at the new place. Starting with some networking equipment tomorrow, I'll be provisioning what I can before I move. I will totally grab one of these; my Define 7 XL doesn't fit great in the small rack I grabbed to stage everything...
  9. Hey, this is a great recommendation. I'd love to move to one of these, as I am now running 19 HDDs in a Fractal Define 7 XL and I can't shove any more into that thing. I have a few questions holding me back, as I don't quite understand what else I'd need to switch over:
     - I have two LSI 9207-8i HBAs right now. Would those be reusable, or is it a problem that they are mini SAS and I'd need something else to connect to the backplane? For SATA drives, do you just use mini SAS breakout cables from the backplane to the HDDs?
     - Right now I have these plugged into the PCIe x16 slots on my motherboard. One is PCIe 5.0 @ 16 lanes (the one intended for the GPU) and the other is just 4 lanes. Would this provide enough bandwidth to the backplane?
     - Were there any challenges to wiring both backplanes? Does that take two cards, or do you wire the two together?
     - My cooler is also way too big right now. I've got Intel, LGA 1700 - was it easy to figure out what the max cooler size would be?
     Sorry for all the questions. I'll keep researching, but this listing looks good and I'd totally be interested if it's possible with only some slight alterations to my current build. Thanks!
  10. Thanks for the tip. I've added what they mention in that thread to the "Unraid OS" section of the system configuration. I didn't see any mention of adding it to the other sections, but since you mentioned running in safe mode I'm going to add it everywhere. See the attached picture of the config. As for safe mode, I assume you just mean to always run in safe mode and never open a Chrome/Firefox/etc. tab to the server unless I absolutely need to. I actually usually have one pinned in Chrome, so it's open a lot, but Chrome usually offloads it so it's not active unless I click on it. I like having it available to check out the dashboards, but if that causes the freezes I can just make some other dashboards on things I host off of it. I will reboot now and let the "Unraid OS" config change take effect. I see there is an option to automatically come back up in safe mode, so I won't need a monitor/keyboard. Thanks!
  11. It survived while I was gone, but I'd really appreciate any help diagnosing the freezes.
  12. That link is broken - what did you do to figure out it was the 960 Evo? I am seeing a freeze following an RxError as well, my logs look a lot like yours before the freeze.
  13. Hello, Unraid version 6.12.4. I woke up this morning to a totally frozen server after it had been up and stable for over 40 days. I will be going on vacation tomorrow for a week and was hopeful that the server would survive without me being there to get it out of jams like this. I have collected the syslog and diagnostics and would greatly appreciate any insights. It looks like this is the time when I had to reboot from the second freeze: Update - the errors start with this: I have posted a screenshot with the details of that PCI bridge but can't really tell what it is. Syslog from flash and diagnostics attached (diagnostics are from right after the second freeze): syslog haynestower-diagnostics-20240109-0856.zip
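      In case it helps whoever looks at the diagnostics, one way to map a bridge address from the log to an actual device from the console (the bus address below is just a placeholder, not the one from my log):

        # show the PCI device tree so whatever hangs off the bridge is visible
        lspci -tv

        # then dump details for the device at a given address (placeholder shown)
        lspci -vv -s 0000:00:1c.0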
  14. I ended up creating a new vDisk and then moving the data disk within Home Assistant to it. So far so good but still curious why this functionality didn't work in unRAID.
  15. Hello, My Home Assistant OS vDisk is starting to get full and I'd like to increase its size. However, when I go to do so, it does not save my changes and just snaps back to the previous size: click enter or click off the text box and poof, back to 32G. I also tried qemu-img resize, but it does not like my disk file:

        root@HaynesTower:~# qemu-img resize "/mnt/user/domains/Home Assistant/haos_ova-10.5.vmdk" +96G
        qemu-img: Image format driver does not support resize

      I tried qemu-img convert as well, which looked like it worked - it hung a while and then completed with no output - but the disk size remained the same. Has anyone overcome something like this? Thanks!
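      For anyone who lands here later: the resize error seems to come from the vmdk format driver itself, so the path I would try next (keeping the original file untouched until the VM boots cleanly from the new one; paths and size are just the ones from my setup above) is to convert to qcow2 first and resize that:

        # convert the vmdk to qcow2 with a progress bar (stop the VM first)
        qemu-img convert -p -O qcow2 \
          "/mnt/user/domains/Home Assistant/haos_ova-10.5.vmdk" \
          "/mnt/user/domains/Home Assistant/haos_ova-10.5.qcow2"

        # qcow2 does support resize, so grow it by the same 96G
        qemu-img resize "/mnt/user/domains/Home Assistant/haos_ova-10.5.qcow2" +96G

        # confirm the new virtual size, then point the VM template at the qcow2 file
        qemu-img info "/mnt/user/domains/Home Assistant/haos_ova-10.5.qcow2"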
  16. Hmm, this isn't working for me like it did in that video. When I click enter like he does, it says at the bottom of the screen "Home Assistant disk capacity has been changed to 128GB", but it stays at 32GB. Allocated doesn't change either, and when I start the VM it's still a 32G capacity - no sign of what I added.
  17. Thanks! Building something like that looks like a good way to go. I found this pre-built thing from QNAP that's a bit expensive: https://www.qnap.com/en-us/product/tl-d1600s/specs/package which, if I'm reading it correctly, comes with a PCIe x8 HBA to plug in the four SAS cables from the JBOD. It would be a plug-and-play solution; it says it works with Ubuntu (Linux), so I'm not 100% sure it would work with unRAID. I'm currently upgrading drives, and that's leaving me with a bunch of drives lying around outside of the case.
  18. Hello, I currently have 16 internal HDDs in my array connected via two LSI 9207-8i HBAs in IT mode. One is in a PCIe x16 slot and the other in a PCIe x4 slot, and neither bottlenecks. My motherboard has another PCIe x4 slot, so I could free up the x16 slot to feed more drives. However, my case only fits 16 drives, and it would be impossible to max out the array at 30 drives using this case. I've seen some stuff about JBODs using cheap motherboards to power controllers, but I haven't seen any tutorials specific to unRAID. I understand a USB enclosure is a bad idea, but would something that uses SAS cables be feasible? I don't really know how this would all go, but in my head it would look like this: Has anyone actually done this? Is the right answer to just get a bigger case if I want more drives? That's not really something I'm interested in doing yet, but maybe in a year or two, if I move to a place where I could fit a server rack, I'd go that route. Does anyone know of unRAID-specific tutorials? What makes me worried about the tutorials I've seen is that they don't talk about how unRAID needs these HBAs in IT mode and how that would all work. Thanks!
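      On the IT-mode point, one way to sanity-check what firmware an LSI SAS2 card like the 9207-8i is running (assuming the sas2flash utility is available on the system) is just listing the controllers:

        # list detected LSI SAS2 controllers; the firmware product ID shows IT vs. IR mode
        sas2flash -list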
  19. Write seems to be shown as read on the storage plot, and read is always zero. It shows this: But my array is doing this: My installed version is 2023.02.14.
  20. Thanks for the tip! Looks like my controller is not the bottleneck! Just gotta replace some real slow drives...
  21. Yeah, this was a f-up on my part - I thought I read these were x4, or that x4 would be fine. I have 4 onboard SATA ports and can get a different type of card if it's a problem. Maybe just moving 4 HDDs to the onboard SATA ports would lessen the load on the HBA with 4 lanes? Or maybe it's not a problem at all - I'm going to replace my slowest disk and see what the Storage System Stats look like as it rebuilds. Just waiting for a very long preclear to finish.
  22. Thanks! I didn't even realize it was parsing properly; I was able to use it to track down that my HBA is only getting x4 lanes instead of x8 and is hence downgraded. It sounds like it still has enough bandwidth at x4, but I may swap it out for onboard and PCIe SATA if I still get slow speeds after replacing my 5400rpm drives. Here is the controller debug file. Let me know if there is anything else I can provide to help: manofoz_DiskSpeedControllerDebug_20231009_225142.tar.gz
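      In case it helps anyone checking the same thing by hand, the negotiated link width also shows up in lspci output (the bus address below is just a placeholder for whatever the first command reports for the HBA; run as root to see the full capability list):

        # find the HBA's PCI address
        lspci | grep -iE 'sas|lsi'

        # compare LnkCap (what the card supports) with LnkSta (what it actually negotiated)
        lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'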
  23. Ah, got it - I missed that. Thanks. I'm not sure how to convert from gigatransfers (8GT/s) since I don't see anything advertising the data bus width in bits, but I'll take your word for it. If this were a bottleneck I have other options than swapping out the board: there are 4 onboard SATA ports I could use, and I see cheap PCIe x1 cards that add another two SATA ports, so that shouldn't be difficult. Using the 4 onboard SATA ports might also reduce the load on the LSI card to the point where x4 is not a problem. I'm not sure if the card would perform better with the remaining four drives on one SAS-to-SATA cable or split across both. It still does seem like there is a bottleneck, though. When running only Exos X20 drives, after I pass the point where the smaller drives stop reading I don't get anywhere near the speed I see when preclearing an X20 (about 190 MB/s vs 280 MB/s). It could just be the slower 5400rpm drives, and that by the time they're done and only X20s are left, I'm near the center of the disks where they're slower anyway. Preclear on a 20TB drive still looks like 3-4 days at 280 MB/s; I just wasn't expecting things to take this long.
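      Working the conversion out for my own sanity (rough, per-direction numbers, and assuming the drives are split roughly 8 per HBA): 8 GT/s is PCIe 3.0 signaling, and with 128b/130b encoding each lane carries about 985 MB/s, so:

        x8 link:  8 lanes x ~985 MB/s  =  ~7.9 GB/s
        x4 link:  4 lanes x ~985 MB/s  =  ~3.9 GB/s
        ~3.9 GB/s across 8 drives      =  ~490 MB/s per drive

      which is well above the ~280 MB/s an X20 manages on its outer tracks, so the x4 link itself shouldn't be what's holding things back.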
  24. @JorgeB I'm not sure I found the container thread but I do see these nice benchmarks you ran: I grabbed the Dynamix V6 plugin for system stats and will monitor that to make sure I'm not bottlenecked by the controller when I upgrade my next drive. Let me know if I got the right thread.