manofoz

Members
  • Posts: 86
  • Joined

  • Last visited

About manofoz

  • Birthday January 11

Retained

  • Member Title: manofoz


manofoz's Achievements

Rookie (2/14)

Reputation: 2

Community Answers: 4

  1. I made the switch yesterday! I was quite nervous because my server was very stable and gets a lot of use, but thankfully it was smooth sailing. All I needed was one LSI Broadcom SAS 9300-8i and a low-profile CPU cooler. My temps are great, CPU idling at 35 right now and no disk is any higher. At full speed the fans that come with this beast are turbines, but I had been using the "Fan Auto Control" plug-in, and after configuring it for the new fans it quieted down quite a bit. My server is in utility space so some noise doesn't hurt anyone, but if it were in living space new fans would be needed. I also happened to have enough four-pin fan cable splitters and extenders on hand to get them all plugged in (7 chassis fans + CPU fan was more than my motherboard could handle). I was able to build out my temporary rack a bit more and it doesn't seem overly strained. I'm not using the rails yet as it's not sturdy enough for those, but I have them ready for when I move. For now it's sitting on a diesel shelf. Will need some blank panels to hide the wires, but it took me all day until ~2AM yesterday to get this far...
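     A quick way to spot-check those temps from the console, as a rough sketch (assumes lm-sensors and smartmontools are available; the drive names are only examples):

       sensors | grep -iE 'package|core'          # CPU package/core temperatures
       for d in /dev/sd[a-g]; do                  # adjust to your actual drives
         printf '%s: ' "$d"
         smartctl -A "$d" | awk '/Temperature_Celsius/ {print $10 " C"}'
       done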
  2. The DIY JBODs I've read about seem to use a cheap motherboard to host a SAS expander, so my guess would be powering on via a switch wired to the motherboard. I was thinking about going that route instead of getting one of these, but I went with the 36-bay chassis. Since unRAID's array doesn't go over 30 drives, I'm not sure I'd go the route of attaching more disks vs. making a standalone NAS if I needed more than that. I'm using 20TB drives, so 28 wouldn't be bad assuming the 2 parity drives count against the 30.
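     Rough capacity math for that layout, as simple shell arithmetic (assuming the 2 parity drives do count against the 30-device array limit):

       data_drives=$((30 - 2))                    # 28 data drives left after 2 parity
       echo "$((data_drives * 20)) TB usable"     # 28 x 20 TB = 560 TB of array capacity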
  3. @murkus I'm hesitant to try this - the server is heavily used and I'm not sure how the "IPv4 custom network on interface bond0" setting gets changed to eth0. I totally screwed things up last time I fiddled with the network settings (added an SFP+ NIC) and ended up with a monitor and KB&M on the floor sorting it out lol.
  4. Thanks! I'll try out those settings you recommended. I am currently using:
     Docker Settings:
       Docker custom network type: macvlan
       Host access to custom networks: Disabled
       IPv4 custom network on interface bond0:
         Subnet: 192.168.0.0/24
         Gateway: 192.168.0.1
         DHCP pool: not set
     Network Settings:
       Enable bridging: No
     Not sure if my IPv4 custom network setting will change to eth0 at some point, but it's bond0 for now. Other than that, it seems I just need to enable bridging and switch to ipvlan. However, I'm still unclear as to what the duplicate IP is trying to host, as I can access all of my dockers and VMs without issue. It also pops up and goes away after about 4 minutes, which is weird.
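     For what it's worth, the custom network Docker actually built can be checked from the console (a rough sketch; the network name varies by setup, so substitute whatever "docker network ls" reports):

       docker network ls                                  # find the custom network Unraid created (often named after the parent interface)
       docker network inspect <network-name> | \
         grep -E '"Driver"|"Subnet"|"Gateway"|"parent"'   # driver (macvlan vs ipvlan), subnet/gateway, and parent interface (bond0 vs eth0)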
  5. I suspect this happened with my old router and it just didn't care. Ubiquiti is having a fit about it which makes it very noticeable but I have not noticed any ill effects. All my VMs have their own IPs and can be accessed. All docker containers can be accessed from the IP of the server with the port configured in the template. No idea why vhost0 is presenting itself as a different network adapter with the same IP.
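     One way to catch what is actually answering for the address when Ubiquiti complains is to watch ARP on the wire (a sketch; assumes tcpdump is available and uses the server IP from the ifconfig output in my original post):

       tcpdump -e -n -i bond0 arp and host 192.168.0.200
       # -e prints the source MAC, so you can see whether the NIC (e4:1d:2d:24:39:c0)
       # or vhost0 (02:fb:51:73:ed:04) is replying for the same IP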
  6. Hello, just switched to a Dream Machine SE and I am getting errors about a duplicate IP. It's two MAC addresses, both owned by the unRAID server. The stuff I've found online mostly points to disabling "Host Access to Custom Networks" for Docker, which was already disabled. With ifconfig I see bond0 and eth0 assigned to the MAC address of the server's network card:

       bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
              inet 192.168.0.200  netmask 255.255.255.0  broadcast 0.0.0.0
              ether e4:1d:2d:24:39:c0  txqueuelen 1000  (Ethernet)
              RX packets 1248780799  bytes 1808446219605 (1.6 TiB)
              RX errors 0  dropped 22179  overruns 454  frame 0
              TX packets 205059076  bytes 143180099899 (133.3 GiB)
              TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

       eth0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
              ether e4:1d:2d:24:39:c0  txqueuelen 1000  (Ethernet)
              RX packets 1248780799  bytes 1808446219605 (1.6 TiB)
              RX errors 0  dropped 454  overruns 454  frame 0
              TX packets 205059076  bytes 143180099899 (133.3 GiB)
              TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

     However, for the mystery MAC address I see vhost0:

       vhost0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
              ether 02:fb:51:73:ed:04  txqueuelen 500  (Ethernet)
              RX packets 8869581  bytes 29481304941 (27.4 GiB)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 3974  bytes 166908 (162.9 KiB)
              TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

     I do have a Windows 11 and a Home Assistant VM running, but both have their own IPs. What is weird is that this duplicate IP with the same MAC does not appear for long. It goes away for a while and then comes back seemingly randomly.
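     A quicker way to see every interface holding that address or carrying its own MAC is the ip tool (a sketch; ip is the modern replacement for ifconfig):

       ip -br link | awk '{print $1, $3}'         # interface name and MAC for bond0, eth0, vhost0, etc.
       ip -br addr show to 192.168.0.0/24         # every interface with an address in the LAN subnet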
  7. It may be running up against the limit; it was dirt cheap and says 500 lbs "max load bearing" on eBay. Not sure what that is when it's on wheels. Not really sure how to weigh the servers, but I'd say the current build is around 100 lbs. Didn't think the 4U would add too much more weight until I added more drives, and I'd be putting it at the bottom. Other than that I was going to put a Dream Machine Special Edition and an NVR Pro on it. For the new house I was going to get something like this: I think I can have it delivered to the garage and then have the movers put it at the termination point of our ethernet cable drops. After assembling the little one I don't really want to assemble a 300 lb one...
  8. Thanks! This was great information, I think I have a plan on what exactly to do. I'll be moving in 9+ months when construction is done and plan to have a 42U rack at the new place. Starting with some networking equipment tomorrow I'll be provisioning what I can before I move. Will totally grab one of these, my Define 7 XL doesn't fit great in the small rack I grabbed to stage everything...
  9. Hey, this is a great recommendation. I'd love to move to one of these as I am now running 19 HDDs in a Fractal Define 7 XL and I can't shove any more in that thing. I have a few questions holding me back, as I don't quite understand what else I'd need to switch over: I have two LSI 9207-8i HBAs right now; would those be reusable, or is it a problem that they are mini SAS and I'd need something else to connect up with the backplane? For SATA drives, do you just use mini SAS breakout cables from the backplane to the HDDs? Also, right now I have these plugged into the x16 PCIe slots on my motherboard. One is PCIe 5.0 @ 16 lanes (the one intended for the GPU) and the other is just 4 lanes. Would this provide enough bandwidth to the backplane? Also, were there any challenges to wiring both backplanes; does that take two cards, or do you wire the two together? My cooler is also way too big right now. I've got Intel, LGA 1700; was it easy to know what the max size for that would be? Sorry for all the questions. I'll keep researching, but this listing looks good and I'd totally be interested if it's possible with only some slight alterations to my current build. Thanks!
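     As a back-of-the-envelope check on the x4 slot (shell arithmetic only; assumes the slot runs at PCIe 3.0 speeds of roughly 985 MB/s per lane and ~270 MB/s sustained per 7200 rpm drive):

       lanes=4; per_lane=985; drives=12; per_drive=270
       echo "slot:   $((lanes * per_lane)) MB/s"      # ~3940 MB/s through a PCIe 3.0 x4 link
       echo "drives: $((drives * per_drive)) MB/s"    # ~3240 MB/s if 12 HDDs all stream at full speed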
  10. Thanks for the tip. I've added what they mention in that thread to the "Unraid OS" section of the system configuration. I didn't see mention of adding it to the other sections, but since you mentioned running in safe mode I'm going to add it everywhere. See the attached picture of the config. As for safe mode, I assume you just mean always run in safe mode and never open a Chrome/Firefox tab to the server unless I absolutely need to. I usually have one pinned in Chrome, so it's open a lot, but Chrome usually offloads it so it's not active unless I click on it. I like having it available to check out the dashboards, but if that causes the freezes I can just make some other dashboards on things I host off of it. Will reboot now and let the "Unraid OS" config change take effect. I see there is an option to automatically come back up in safe mode, so I don't need a monitor/keyboard. Thanks!
  11. Survived while I was gone but I'd really appreciate any help diagnosing the freezes.
  12. That link is broken - what did you do to figure out it was the 960 Evo? I am seeing a freeze following an RxError as well, my logs look a lot like yours before the freeze.
  13. Hello, Unraid version 6.12.4. Woke up this morning to a totally frozen server after it had been up and stable for over 40 days. I will be going on vacation tomorrow for a week and was hopeful that the server would survive without me being there to get it out of jams like this. I have collected syslog and diagnostics and would greatly appreciate any insights. It looks like this is the time when I had to reboot from the second freeze: Update - the errors start with this: I have posted a screenshot with the details of that PCI bridge but can't really tell what it is. Syslog from flash and diagnostics attached (diagnostics are from right after the second freeze): syslog haynestower-diagnostics-20240109-0856.zip
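     For anyone hitting something similar, this is roughly how the pieces can be pulled together from the console (a sketch; command availability and paths may differ by Unraid version):

       diagnostics                                                  # writes a diagnostics zip to the flash drive, if present on your version
       grep -iE 'pcie|aer|bus error' /var/log/syslog | tail -n 50   # the PCIe/AER error lines leading up to the freeze
       lspci -tv                                                    # tree view, to see what sits behind the failing PCI bridge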
  14. I ended up creating a new vDisk and then moving the data disk within Home Assistant to it. So far so good but still curious why this functionality didn't work in unRAID.