Posts posted by manofoz

  1. 2 hours ago, jimmorges said:

    I have gone through the documents but they seem very confusing to me. Could you please share the doc for the exact product?

    Hi @jimmorges - this is the manual I followed for the 36-bay chassis I got on eBay: https://www.supermicro.com/manuals/chassis/4U/SC847BE1C-2C.pdf.

     

    One thing I wish I had known is that it's probably a bit random how many dummy drives you get in the hot-swappable bays. I ended up removing dummy trays from some bays to add real drives and then re-adding them to empty bays that didn't have any, when I could have just swapped the trays around. I've landed at 24/36 bays with HDDs so far and have about 4 dummy trays left, so I must have had close to 16 in total. I think they're important for airflow; not sure what I would have done without them. The chassis also came with plenty of screws for the HDDs, but they weren't like any I had, so if it hadn't I would have had to dig up the Supermicro part # to order some.

  2. On 3/23/2024 at 3:36 AM, bbrodka said:

    The backplane uses 4-lane 8087 connectors; cables were included. There are mini SAS to 8087 adapters available, but personally I would just upgrade the HBA to a SAS3 one, not only to eliminate the extra cables but also to bring your speed up to 12Gb/s, leaving room to upgrade to SAS3 12Gb/s drives in the future.

     

    You don't need breakout cables; the backplanes handle all of that. They can handle SAS or SATA drives, or a mixture of the two.

     

    There are SAS3 inputs and outputs on the backplanes, so you can daisy-chain them if desired. I run one HBA to the front and one 8-port HBA to the back, but you could use a single HBA to run all the bays if desired (it was wired that way by default).

     

    A low-profile cooler and low-profile expansion cards are needed, as the motherboard cavity only has low-profile clearance (the lower part of the chassis under it is used for 12 more drive bays).

     

    As far as bandwidth goes, realistically the only bottleneck you have to worry about is during parity checks and rebuilds.

     

    I made the switch yesterday! I was quite nervous because my server was very stable and gets a lot of use but thankfully it was smooth sailing.

     

    All I needed was one LSI Broadcom SAS 9300-8i and a low-profile CPU cooler. My temps are great - the CPU is idling at 35°C right now and no disk is any higher.

     

    At full speed the fans that come with this beast are turbines, but I had been using the "Fan Auto Control" plug-in, and after configuring it for the new fans it quieted down quite a bit. My server is in utility space so some noise doesn't hurt anyone, but if it were in living space new fans would be needed. I also happened to have enough four-pin fan splitters and extenders on hand to get them all plugged in (7 chassis fans + the CPU fan was more than my motherboard could handle).

     

    I was able to build out my temporary rack a bit more and it doesn't seem overly strained. I'm not using the rails yet, as it's not sturdy enough for those, but I have them ready for when I move. For now it's sitting on a diesel shelf. I'll need some blank panels to hide the wires, but it took me all day until ~2AM yesterday to get this far...

     

    [image attached]

     

  3. On 3/6/2024 at 7:01 PM, MrCrispy said:

    Looking at a similar SM chassis. Does anyone run these as a JBOD DAS? I want to use these with an existing server to add more storage. When you connect a DAS like this how does it turn on/off?

    The DIY JBODs I've read about look to use a cheap motherboard to power a SAS expander, so my guess would be powering on via a switch wired to that motherboard. I was thinking about going that route instead of getting one of these, but I went with the 36-bay chassis. Since unRAID's array doesn't go over 30 drives, I'm not sure I'd attach more disks rather than build a standalone NAS if I needed more than that. I'm using 20TB drives, so 28 data drives wouldn't be bad, assuming the 2 parity drives count against the 30.

  4. On 3/8/2024 at 11:41 PM, manofoz said:

     

    Thanks! I'll try out those settings you recommended. I am currently using:


    Docker Settings:

    Docker custom network type: macvlan

    Host access to custom networks: Disabled

    IPv4 custom network on interface bond0: Subnet: 192.168.0.0/24 Gateway: 192.168.0.1 DHCP pool: not set

     

    Network Settings:

    Enable bridging: No

     

    Not sure if my IPv4 custom network setting will change to eth0 at some point, but it's bond0 now. Other than that, it seems I just need to enable bridging and switch to ipvlan. However, I'm still unclear on what the duplicate IP is actually hosting, since I can access all of my Docker containers and VMs without issue. It also pops up and goes away after about 4 minutes, which is weird.

     

    @murkus I'm hesitant to try this - the server is heavily used and I'm not sure how "IPv4 custom network on interface bond0" gets changed to eth0. I totally screwed things up the last time I fiddled with the network settings (added an SFP+ NIC) and ended up with a monitor and keyboard & mouse on the floor sorting it out, lol.

  5. On 3/7/2024 at 12:21 PM, murkus said:

     

    There are useful answers in this new thread:

     

     

     

     

    Thanks! I'll try out those settings you recommended. I am currently using:


    Docker Settings:

    Docker custom network type: macvlan

    Host access to custom networks: Disabled

    IPv4 custom network on interface bond0: Subnet: 192.168.0.0/24 Gateway: 192.168.0.1 DHCP pool: not set

     

    Network Settings:

    Enable bridging: No

     

    Not sure if my IPv4 custom network setting will change to eth0 at some point, but it's bond0 now. Other than that, it seems I just need to enable bridging and switch to ipvlan. However, I'm still unclear on what the duplicate IP is actually hosting, since I can access all of my Docker containers and VMs without issue. It also pops up and goes away after about 4 minutes, which is weird.
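    In the meantime, a quick way to see what the custom macvlan network has actually created (which containers, and which IPs and MAC addresses they were given) is to inspect it from the CLI - a rough sketch, assuming the custom network shows up under the interface name reported by docker network ls:

    # List Docker networks; on Unraid the custom macvlan network is usually named after the parent interface (e.g. bond0 or br0)
    docker network ls

    # Show the subnet, gateway, and every container's IP and MAC address on that network
    docker network inspect bond0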

  6. I suspect this happened with my old router and it just didn't care. Ubiquiti is having a fit about it, which makes it very noticeable, but I haven't noticed any ill effects. All my VMs have their own IPs and can be accessed, and all Docker containers can be accessed from the server's IP with the port configured in the template. No idea why vhost0 is presenting itself as a different network adapter with the same IP.

    [image attached]

  7. Hello,

    Just switched to a Dream Machine SE and I'm getting errors about a duplicate IP. It's two MAC addresses, both owned by the unRAID server.

    [image attached]

     

    The stuff I've found online mostly points to disabling "Host access to custom networks" for Docker, which was already disabled:
    [image attached]

     

    With ifconfig I see bond0 and eth0 assigned to the MAC address of the server's network card:

    bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
            inet 192.168.0.200  netmask 255.255.255.0  broadcast 0.0.0.0
            ether e4:1d:2d:24:39:c0  txqueuelen 1000  (Ethernet)
            RX packets 1248780799  bytes 1808446219605 (1.6 TiB)
            RX errors 0  dropped 22179  overruns 454  frame 0
            TX packets 205059076  bytes 143180099899 (133.3 GiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    	
    eth0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
            ether e4:1d:2d:24:39:c0  txqueuelen 1000  (Ethernet)
            RX packets 1248780799  bytes 1808446219605 (1.6 TiB)
            RX errors 0  dropped 454  overruns 454  frame 0
            TX packets 205059076  bytes 143180099899 (133.3 GiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0	

     

    However, for the mystery MAC address I see vhost0:

    vhost0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            ether 02:fb:51:73:ed:04  txqueuelen 500  (Ethernet)
            RX packets 8869581  bytes 29481304941 (27.4 GiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 3974  bytes 166908 (162.9 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

     

    I do have a Windows 11 VM and a Home Assistant VM running, but both have their own IPs. What is weird is that this duplicate IP with the same MAC does not stick around for long. It goes away for a while and then comes back seemingly at random.
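    In case it helps anyone else digging into the same thing, these are the standard iproute2 commands I would use to compare what each interface is claiming (nothing Unraid-specific, just a sketch):

    # Compact overview of every interface, its state, and its addresses
    ip -br addr show

    # Compare the MAC addresses of the bond, the physical NIC, and the vhost shim
    ip link show bond0
    ip link show eth0
    ip link show vhost0

    # Watch neighbor/ARP activity to catch when the second MAC starts answering for 192.168.0.200
    ip monitor neigh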


  8. 6 hours ago, OrneryTaurus said:

     

    I have concerns about this rack being able to support the weight of a 4U server. You'll want to double check the maximum loads the rack supports. It might require permanent installation to increase the maximum load it can handle.

     

    It may be running up against the limit; it was dirt cheap and the listing says 500 lbs "max load bearing" on eBay. Not sure what that becomes when it's on wheels.

     

    I'm not really sure how to weigh the servers, but I'd say the current build is around 100 lbs. I didn't think the 4U would add too much more weight until I added more drives, and I'd be putting it at the bottom.

     

    Other than that I was going to put a Dream Machine Special Edition and an NVR Pro on it. For the new house I was going to get something like this:
     

    [image attached]

     

    I think I can have it delivered to the garage and then have the movers put it at the termination point of our Ethernet cable drops. After assembling the little one, I don't really want to assemble a 300 lb one...

  9. 1 hour ago, OrneryTaurus said:

     

    Thanks! This was great information; I think I now have a plan for exactly what to do. I'll be moving in 9+ months when construction is done and plan to have a 42U rack at the new place.

     

    Starting with some networking equipment tomorrow, I'll be provisioning what I can before I move. I'll totally grab one of these; my Define 7 XL doesn't fit great in the small rack I grabbed to stage everything...

     

    [image attached]

  10. On 6/13/2023 at 7:01 AM, bbrodka said:

    Not related to the seller, just reporting a good experience and price.

    If you're looking for a 36-bay server chassis these are pretty nice. This seller includes 2 hot-swap -SQ power supplies, which are the super-quiet versions, and both the front 24-port and rear 12-port backplanes are expanders, meaning you can get away with a low-port-count HBA and don't need external expanders.
    Both backplanes are SAS3/SATA3:

    https://www.supermicro.com/manuals/other/BPN-SAS3-846EL.pdf

    https://www.supermicro.com/manuals/other/BPN-SAS3-826EL.pdf

     

    36x 3.5" LFF Hard Drive Trays w/ Screws Included

     

    Sold as used, but it could have passed as new - I couldn't find any hint of prior use, no dust or blemishes anywhere!

    No rack-mount rails, but I doubt many of us are rack-mounting our stuff.


    https://www.ebay.com/itm/204179662584
    Keep in mind the motherboard area is low profile due to the 12 rear drive bays under it.

    Standard motherboards can be used, with an optional cable to connect the front panel if needed:
    https://www.ebay.com/itm/255893202956

     

    Hey, this is a great recommendation. I'd love to move to one of these, as I'm now running 19 HDDs in a Fractal Define 7 XL and I can't shove any more into that thing. I have a few questions holding me back, as I don't quite understand what else I'd need to switch over:

     

    • I have two LSI 9207-8i HBAs right now. Would those be reusable, or is it a problem that they are mini SAS and I'd need something else to connect to the backplane?
    • For SATA drives, do you just use mini SAS breakout cables from the backplane to the HDDs?
    • Right now I have these plugged into the x16 PCIe slots on my motherboard. One is PCIe 5.0 with 16 lanes (the one intended for the GPU) and the other gets just 4 lanes. Would this provide enough bandwidth to the backplane? Also, were there any challenges wiring both backplanes - does that take two cards, or do you wire the two together?
    • My cooler is also way too big right now. I've got Intel LGA 1700 - was it easy to figure out the maximum cooler height that would fit?

    Sorry for all the questions. I'll keep researching, but this listing looks good and I'd totally be interested if it's possible with only some slight alterations to my current build.

     

    Thanks!

  11. 18 hours ago, JorgeB said:

    For these first try this:

    https://forums.unraid.net/topic/118286-nvme-drives-throwing-errors-filling-logs-instantly-how-to-resolve/?do=findComment&comment=1165009

     

    For these, try booting in safe mode and/or closing any browser windows open to the GUI - only open it when you need to use it, then close it again.

     

     

    Thanks for the tip. I've added what they mention in that thread to the "Unraid OS" section of the Syslinux configuration. I didn't see any mention of adding it to the other boot options, but since you mentioned running in safe mode I'm going to add it everywhere. See the attached picture of the config.

     

    As for safe mode, I assume you just mean to always run in safe mode and never open a Chrome/Firefox/etc. tab to the server unless I absolutely need to. I actually usually have one pinned in Chrome, so I have one open a lot, but Chrome usually offloads it so it's not active unless I click on it. I like having it available to check out the dashboards, but if that causes the freezes I can just build other dashboards on things I host off of it.

     

    I'll reboot now and let the "Unraid OS" config change take effect. I see there's an option to automatically come back up in safe mode, so I don't need a monitor/keyboard.

     

    Thanks!

     

    [image attached]

  12. On 3/28/2021 at 2:11 PM, SleepingJake said:

     

    Thanks JorgeB! Based on this, it looks like the affected device should be the 960 Evo I have installed. This is a device I'm not using at present so it should be no big deal; I will get it removed and continue to watch the log on this machine!

     

    Mar 28 04:33:41 i3 kernel: pcieport 0000:00:1d.0: AER: Corrected error received: 0000:0c:00.0
    Mar 28 04:33:41 i3 kernel: nvme 0000:0c:00.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, (Receiver ID)
    Mar 28 04:33:41 i3 kernel: nvme 0000:0c:00.0:   device [144d:a804] error status/mask=00000001/00006000
    Mar 28 04:33:41 i3 kernel: nvme 0000:0c:00.0:    [ 0] RxErr       

     

    [image attached]

    That link is broken - what did you do to figure out it was the 960 Evo? I'm seeing a freeze following an RxErr as well; my logs look a lot like yours before the freeze.
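    For reference, something like this should show what is sitting at a PCI address like 0000:0c:00.0 (a rough lspci/sysfs sketch, not something from the linked thread):

    # Name the vendor/device at the address from the AER messages
    lspci -s 0c:00.0 -nn

    # Each NVMe controller in /sys/class/nvme is a symlink into its PCI device path
    ls -l /sys/class/nvme/
    readlink -f /sys/class/nvme/nvme0

    # Model string of that controller, to match it to a specific drive
    cat /sys/class/nvme/nvme0/model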

  13. Hello,

     

    Unraid Version 6.12.4

     

    I woke up this morning to a totally frozen server after it had been up and stable for over 40 days. I'm going on vacation tomorrow for a week and was hoping the server would survive without me being there to get it out of jams like this. I have collected the syslog and diagnostics and would greatly appreciate any insights.

     

    It looks like this is from around the time I had to reboot after the second freeze:

    Quote

    Jan  9 08:33:52 HaynesTower php-fpm[13589]: [WARNING] [pool www] child 12857 exited on signal 9 (SIGKILL) after 199.092351 seconds from start
    Jan  9 08:33:53 HaynesTower php-fpm[13589]: [WARNING] [pool www] child 14272 exited on signal 9 (SIGKILL) after 181.265295 seconds from start
    Jan  9 08:33:54 HaynesTower kernel: pcieport 0000:00:1a.0: AER: Corrected error received: 0000:00:1a.0
    Jan  9 08:33:54 HaynesTower kernel: pcieport 0000:00:1a.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, (Receiver ID)
    Jan  9 08:33:54 HaynesTower kernel: pcieport 0000:00:1a.0:   device [8086:7a48] error status/mask=00000001/00002000
    Jan  9 08:33:54 HaynesTower kernel: pcieport 0000:00:1a.0:    [ 0] RxErr                 
    Jan  9 08:34:13 HaynesTower php-fpm[13589]: [WARNING] [pool www] child 27609 exited on signal 9 (SIGKILL) after 16.440253 seconds from start
    Jan  9 08:35:05 HaynesTower php-fpm[13589]: [WARNING] [pool www] child 27655 exited on signal 9 (SIGKILL) after 21.825540 seconds from start
    Jan  9 08:36:07 HaynesTower php-fpm[13589]: [WARNING] [pool www] child 28095 exited on signal 9 (SIGKILL) after 72.155603 seconds from start
    Jan  9 08:36:15 HaynesTower php-fpm[13589]: [WARNING] [pool www] child 28422 exited on signal 9 (SIGKILL) after 66.596950 seconds from start
    Jan  9 08:36:22 HaynesTower kernel: pcieport 0000:00:1a.0: AER: Corrected error received: 0000:00:1a.0
    Jan  9 08:36:22 HaynesTower kernel: pcieport 0000:00:1a.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, (Receiver ID)
    Jan  9 08:36:22 HaynesTower kernel: pcieport 0000:00:1a.0:   device [8086:7a48] error status/mask=00000001/00002000
    Jan  9 08:36:22 HaynesTower kernel: pcieport 0000:00:1a.0:    [ 0] RxErr                 
    Jan  9 08:47:21 HaynesTower kernel: microcode: microcode updated early to revision 0x26, date = 2022-09-19
    Jan  9 08:47:21 HaynesTower kernel: Linux version 6.1.49-Unraid (root@Develop-612) (gcc (GCC) 12.2.0, GNU ld version 2.40-slack151) #1 SMP PREEMPT_DYNAMIC Wed Aug 30 09:42:35 PDT 2023
    Jan  9 08:47:21 HaynesTower kernel: Command line: BOOT_IMAGE=/bzimage initrd=/bzroot,/bzroot-gui

     

    Update - the errors start with this:

     

    Quote

    Jan  9 07:53:42 HaynesTower kernel: pcieport 0000:00:1a.0: AER: Corrected error received: 0000:00:1a.0

     

    I have posted a screenshot with the details of that PCI bridge but can't really tell what it is.
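    For what it's worth, plain lspci should be able to name the device at 00:1a.0 directly (just a sketch):

    # Identify the device at the address from the AER messages
    lspci -s 00:1a.0 -nn

    # Verbose view of that root port / bridge, including link status and errors
    lspci -s 00:1a.0 -vv

    # Tree view, to see which devices sit behind that bridge
    lspci -tv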

     

    Syslog from flash and diagnostics attached (diagnostics are from right after the second freeze):

    syslog haynestower-diagnostics-20240109-0856.zip

    [image attached]

  14. Hello,

    My Home Assistant OS vDisk is starting to get full and I'd like to increase its size. However, when I go to do so it does not save my changes and just snaps back to the previous size:

     

    [image attached]

     

    Press Enter or click off the text box and poof - it snaps back to 32G:

    [image attached]

     

    I also tried qemu-img resize but it does not like my disk file:

     

    root@HaynesTower:~# qemu-img resize "/mnt/user/domains/Home Assistant/haos_ova-10.5.vmdk" +96G
    
    qemu-img: Image format driver does not support resize

     

    I tried qemu-img convert as well, which looked like it worked - it ran for a while and then completed with no output - but the disk size remained the same.
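    If the VMDK format is what's blocking the resize, my understanding is that the usual route is to convert the disk to a format qemu-img can grow and then resize the copy - an untested sketch along these lines (the qcow2 choice and keeping the same directory are just assumptions):

    cd "/mnt/user/domains/Home Assistant"

    # Convert the VMDK to qcow2 (raw would also work); this writes a new file next to the old one
    qemu-img convert -p -f vmdk -O qcow2 haos_ova-10.5.vmdk haos_ova-10.5.qcow2

    # Grow the copy, then sanity-check the new virtual size
    qemu-img resize haos_ova-10.5.qcow2 +96G
    qemu-img info haos_ova-10.5.qcow2

    # The VM template would then need to point at the new .qcow2 before booting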

     

    Anyone overcome something like this?

    Thanks!

  15. On 4/6/2022 at 7:54 AM, JonathanM said:

     

    Hmm, this isn't working for me like it did in that video. When I press Enter like he does, it says at the bottom of the screen "Home Assistant disk capacity has been changed to 128GB", but it stays at 32GB. Allocated doesn't change either, and when I start the VM it's still 32G capacity - no sign of what I added.

    [image attached]

  16. 5 hours ago, JorgeB said:

     

    Thanks! Building something like that looks like a good way to go. I found this pre-built unit from QNAP that's a bit expensive: https://www.qnap.com/en-us/product/tl-d1600s/specs/package which, if I'm reading it correctly, comes with a PCIe x8 HBA to plug the four SAS cables from the JBOD into. It would be a plug-and-play solution; it says it works with Ubuntu (Linux), so I'm not 100% sure it would work with unRAID.

     

    I'm currently upgrading drives, but that's leaving me with a bunch of drives lying around outside of the case.

  17. Hello,

     

    I currently have 16 internal HDDs in my array, connected via two LSI 9207-8i HBAs in IT mode. One is in a PCIe x16 slot and the other in a PCIe x4, and neither bottlenecks. My motherboard has another PCIe x4 slot, so I could free up the x16 slot to feed more drives. However, my case only fits 16, and it would be impossible to max out the array at 30 drives using this case. I've seen some stuff about JBODs using cheap motherboards to power controllers, but I haven't seen any tutorials specific to unRAID. I understand a USB enclosure is a bad idea, but would something that uses SAS cables be feasible? I don't really know how this would all go, but in my head it would look like this:

     

    [image attached]

     

    Has anyone actually done this? Is the right answer just to get a bigger case if I want more drives? That's not really something I'm interested in doing yet, but maybe in a year or two, if I move somewhere I could fit a server rack, I'd go that route. Does anyone know of unRAID-specific tutorials? What makes me worried about the tutorials I've seen is that they don't talk about how unRAID needs these HBAs in IT mode and how that would all work.
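    For what it's worth, checking whether an HBA is already running IT firmware is just a query with Broadcom/LSI's sas2flash utility on whatever host it's plugged into - a rough sketch (exact output wording may differ):

    # List every LSI SAS2 controller the tool can see, with firmware version and type
    sas2flash -listall

    # Details for controller 0; the firmware product ID shows IT vs IR firmware
    sas2flash -list -c 0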

     

    Thanks!

  18. 4 hours ago, jbartlett said:

     

    It looks like the tools I use to gather the information changed their layout because I see that on my controller too. Will make it easier for me to fix.

     

    Have you tried a controller benchmark yet? It'll tell you if you're reaching the capacity of what your controller can support.

     

    [image attached]

     

    Thanks for the tip! Looks like my controller is not the bottleneck! Just gotta replace some real slow drives...

    [image attached]

  19. On 10/8/2023 at 4:28 AM, JorgeB said:

    That bottom x16 slot is attached to the PCH and will always run at x4; you'd need a different board to have both HBAs running at x8.

     

    Yeah, this was an f-up on my part - I thought I read these were x4, or that x4 would be fine. I have 4 onboard SATA ports and can get a different type of card if it's a problem. Maybe just moving 4 HDDs to the onboard SATA ports would lessen the load on the HBA with 4 lanes?

     

    Or maybe it's not a problem at all. I'm going to replace my slowest disk and see what the Storage System Stats look like as it rebuilds. Just waiting for a very long preclear to finish.

  20. On 10/8/2023 at 9:02 PM, jbartlett said:

     

    This display is due to parsing - your system is outputting a layout that I haven't encountered and thus didn't parse properly. If you could submit a controller debug file (click on the Debug link at the bottom of the page in the DiskSpeed app), I can take a look at it.

     

    Thanks! I didn't even realize it wasn't parsing properly, and I was still able to use it to track down that my HBA is only getting x4 lanes instead of x8 and is hence downgraded. It sounds like it still has enough bandwidth at x4, but I may swap it out for onboard and PCIe SATA if I still get slow speeds after replacing my 5400rpm drives.
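    For anyone chasing the same thing, the negotiated link width is also visible straight from lspci (rough sketch - 01:00.0 is just an example address, substitute the HBA's actual one):

    # Find the HBA's PCI address
    lspci | grep -i sas

    # LnkCap is what the card supports, LnkSta is what was actually negotiated (e.g. Speed 8GT/s, Width x4)
    lspci -s 01:00.0 -vv | grep -E "LnkCap:|LnkSta:"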

     

    Here is the controller debug file. Let me know if there is anything else I can provide to help: 

    manofoz_DiskSpeedControllerDebug_20231009_225142.tar.gz

  21. 19 hours ago, JorgeB said:

    I was just referring to the other post you've made in the diskspeed docker thread, I had already replied there.

    Ah got it, missed that. Thanks.

    I'm not sure how to convert from gigatransfers (8 GT/s) since I don't see anything advertising the data bus width in bits, but I'll take your word for it. If this were a bottleneck I'd have options other than swapping out the board: there are 4 onboard SATA ports I could use, and I see cheap PCIe x1 cards that add another two SATA ports, so that shouldn't be difficult. Moving drives to the onboard SATA might also reduce the load on the LSI card to the point where x4 isn't a problem. I'm not sure if the card would perform better with the remaining four drives on one SAS-to-SATA cable or split across both.
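    (Working it out after the fact: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so each lane carries roughly 1 GB/s of usable bandwidth and an x4 link tops out around 4 GB/s - a quick back-of-the-envelope that ignores protocol overhead:)

    # 8 GT/s x 128/130 encoding, divided by 8 bits per byte = usable GB/s per lane
    echo "scale=3; 8 * 128 / 130 / 8" | bc      # ~0.98 GB/s per lane
    echo "scale=3; 4 * 8 * 128 / 130 / 8" | bc  # ~3.94 GB/s for an x4 link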

     

    It still does seem like there is a bottleneck. When running only Exos X20 drives, after I pass the point where the smaller drives stop reading I don't get anywhere near the speed I see pre-clearing an X20 (roughly 190 MB/s vs 280 MB/s). It could just be the slower 5400rpm drives, and by the time they're done and it's only X20s left, I'm near the center of the disks. Preclear on a 20TB still looks like 3-4 days at 280 MB/s; I just wasn't expecting things to take this long.
