SergeantCC4


Posts posted by SergeantCC4

  1. On 10/21/2022 at 3:54 PM, Raki72 said:

     

Not sure how to do this. How can you exclude disks from shares?

I'd highly recommend the video below from Spaceinvader One. I followed that procedure and was able to remove a handful (about six) of 2, 3, and 4 TB disks and condense them down to one or two 16 TB disks. The one thing you'll want to keep in mind is that if the disk you remove isn't at the tail of your array, you can end up with a gap in the slot numbering; for example, you might have disks in slots 1-5 and 9-15 and nothing in between. This only matters if you're trying to maintain parity throughout. If not, you can follow the first part of the video, where you just move your data off the disk and can then arrange the disks however you'd like.


Thanks @JorgeB, I was able to back up my files to the array, reformat the cache pool, and migrate my data back. Some strange networking bugs with WireGuard cropped up, but they seemed to fix themselves when I upgraded to 6.11.5. I read the post you linked and set up an hourly check using User Scripts.
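
For anyone finding this later, the hourly check I set up is roughly the script below (a sketch of my User Scripts job, assuming the pool is mounted at /mnt/cache; adjust the path and notification text to suit):

#!/bin/bash
# Hourly User Scripts job: warn if the btrfs cache pool reports any device errors.
# Assumes the pool is mounted at /mnt/cache.
if mountpoint -q /mnt/cache; then
    # "btrfs dev stats" prints one counter per line; any non-zero value is a problem.
    if btrfs dev stats /mnt/cache | grep -vq ' 0$'; then
        /usr/local/emhttp/webGui/scripts/notify -i warning -s "btrfs device errors detected on cache pool"
    fi
fi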

     

I also read a few other forum threads about the btrfs issues and set up a weekly balance, since it seemed like there's really no harm in doing that (is that right?).
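
The weekly balance is basically just the one-liner below in another User Scripts job (again a sketch; /mnt/cache is my pool path, and the usage filters are values I picked up from the forum threads rather than anything official):

#!/bin/bash
# Weekly User Scripts job: partial balance that repacks mostly-empty chunks on the cache pool.
# -dusage=75 / -musage=75 only touch chunks that are less than 75% full, so it stays fairly quick.
btrfs balance start -dusage=75 -musage=75 /mnt/cache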

     

I've had a busy day and didn't get a chance to really dig in, but is there anywhere I can start for a basic understanding of scrub and balance and how often, if at all, they should be run? I'm not sure I understand the purpose of these features.

Recently, after updating to 6.11.1 (to correct the WireGuard display glitch), my server would intermittently become unresponsive and I would be forced to perform a hard reboot. This typically occurred while I was connected to my server via WireGuard (it runs Pi-hole so I can get the benefits remotely).

     

I had previously gotten a corrupt Docker image error, but when I was unable to stop/delete the image I simply rebooted the server, and that seemed to fix the problem. I posted on another topic thinking I was having the same issue, but I was mistaken. I tried to run the Memtest built into the Unraid boot menu, but upon selecting it the server just rebooted and came back to the Unraid boot screen. I read I was supposed to enable CSM, but for some reason when I did that my server would no longer POST. I disconnected my PCIe JBOD cards and my boot flash, was able to POST, and am now running Memtest86 v10. The first pass just finished and hasn't found any errors, but I'm going to let it run through a few more cycles.

     

My question, however: I have a dual-NVMe protected cache pool set up, and I'm unsure whether I have the right settings and/or whether I need to run a balance or scrub. It worked fine during 6.10, and I upgraded to 6.11 to take advantage of the iGPU in the 12500 I just upgraded to. I tried to find some documentation, but it seems to be a little fuzzy (or maybe I'm the fuzzy one) about what to do when, where, and why. Anyone have any recommendations?
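
In case it helps whoever takes a look: the way I've been inspecting the pool so far is with the commands below over SSH (just a sketch; /mnt/cache is where my dual-NVMe pool is mounted). They at least show whether data and metadata are on the RAID1 profile and whether a scrub turns up any errors.

# Show how data/metadata are allocated; the profile should read RAID1 for a redundant two-device pool
btrfs filesystem usage /mnt/cache

# Per-device error counters (all values should be 0)
btrfs dev stats /mnt/cache

# Read everything back and verify checksums; -B runs in the foreground and prints a summary
btrfs scrub start -B /mnt/cache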

     

    Thanks in advance!

    syslog.txt citadel-diagnostics-20221118-2026.zip

  4. On 10/7/2022 at 7:22 PM, NoBigCat said:

Set the Intel GPU value to /dev/dri, or just add a device, name it Intel GPU, and give it the value /dev/dri.

In Jellyfin under Admin > Playback I changed the transcoder to Intel QuickSync and made sure to checkmark "enable hardware encoding".

I have all the formats checkmarked; I semi-remember looking at what the 12th gen iGPU supported (could be wrong on this, so double-check for yourself).

I also have tone mapping enabled (not the VPP one). I don't have any files that need it, so I haven't really messed with tone mapping or looked into the difference between the two. I must have enabled it for Nvidia and left it at that. Maybe the VPP one is the right choice?

Saved, and that's it; for me it worked after that. To check whether it's using the Intel GPU, I play a video, lower the quality so it has to transcode, then open an SSH session to Unraid and run intel_gpu_top to see if it's being used.

"Set the Intel GPU value to /dev/dri" ... how do I do that?

  5. 2 hours ago, JorgeB said:

I got around 5500MB/s total with the 9300-8i and Intel RES3TV360; because they are SATA devices they will never quite reach the max for a PCIe 3.0 device (for that you'd need SAS3 devices), but it's still good speed for 24 devices.

So I shouldn't have to worry too much about my (eventual) 24 hard drives capping out going through the expander to the 9308-8i over a PCIe 3.0 x8 link?

  6. 3 hours ago, JorgeB said:

    Depends on the controller/expander, see here for some actual numbers.

I was going to have (eventually) 20-24 Exos drives. I saw in the link you sent that there is an LSI 9207-8i PCIe gen3 x8 card, but it says "(4800MB/s)". I'm a little confused. I thought PCIe gen 3 was about 1GB/s per lane. Wouldn't that go up to 8GB/s total then? I don't know what I'm missing.

     

If that's the case, the motherboard I was looking at has a bunch of PCIe 3.0 x16 slots that run in x4 mode, and if I put that card in one of those at x4 I'd still get ~4GB/s per slot? And thus I could use the 9207.
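
To lay out the math I'm doing (which may be exactly where I'm going wrong), here's the back-of-the-envelope calculation. The per-lane figure is the usual ~985MB/s for PCIe 3.0 after 128b/130b encoding overhead, and the 4800MB/s is the number from that link, which I'm assuming is the controller chip's own limit rather than the slot's:

#!/bin/bash
# Rough PCIe 3.0 bandwidth sanity check (theoretical link numbers, not real-world throughput)
lane=985                                          # ~MB/s per PCIe 3.0 lane after encoding overhead
echo "x8 link: $((lane * 8)) MB/s theoretical"    # ~7880 MB/s
echo "x4 link: $((lane * 4)) MB/s theoretical"    # ~3940 MB/s
echo "9207-8i figure from the linked thread: 4800 MB/s (below the x8 ceiling)"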

     

Otherwise, if I get the 9308-8i and (I was thinking) the Intel SAS3 expander RES3TV360, I can hook up all 24 hard drives in dual link and get about the same performance as a single 24i controller?

     

I'm slightly out of my element with HBAs and expanders and how they work. For example, I'm having a hard time understanding dual link. Is it literally just running both cables from the HBA to the expander?

  7. 9 minutes ago, JorgeB said:

I was basically referring to any other 24-port LSI HBA, if there are any; not sure there is one in the 9400 series. Another option is an 8-port HBA with an expander.

Couldn't that lead to bottlenecking at some point, like during parity checks/rebuilds? I know it's a small portion of the use case and I'm being picky, but I'd rather get something I can move to another case and not have to worry about newer tech maxing it out.

  8. On 7/11/2022 at 7:51 PM, johnnyfive said:

I was given some slow 4GB DDR3 RAM. I'm currently using 16GB (2x8GB) of DDR3 RAM (this was my old gaming computer; I don't remember the speed, but it's likely better than the 4GB modules I got for free). Current idle use as I look is 25%. Is more RAM worth it, or would 16GB be more stable/faster?

To clarify, you'd have 16GB either way (4x4GB vs 2x8GB)? And you're using 25% of the 16GB?

     

    On 7/11/2022 at 7:51 PM, johnnyfive said:

    2nd Question.

Currently running without a graphics card installed, as I was using the 2500K without any passthrough. Is it worth putting something old inside? I think I have a 4300, a 7450, or an 8800 GT 512.

Those graphics cards are older than the CPU (by a lot in some cases); IMO I wouldn't use them.

     

    On 7/11/2022 at 7:51 PM, johnnyfive said:

Last question... Is it possible to run a Windows VM without an additional graphics card and view the VM through a web browser (like Remote Desktop or something better)?

I'm not too familiar with virtualization in Unraid, as I'm trying to get that started up myself. But I did notice that some virtualization features are not supported by your CPU:

    http://ark.intel.com/products/52210

It seems VT-d (needed for PCIe passthrough) is not supported, but general virtualization (VT-x) is.
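
If it helps, I believe you can confirm this from the Unraid console itself with something like the commands below (a sketch; the exact dmesg wording varies by platform):

# VT-x / AMD-V: a non-zero count means the CPU exposes hardware virtualization
grep -c -E 'vmx|svm' /proc/cpuinfo

# VT-d / IOMMU (needed for PCIe passthrough): look for DMAR/IOMMU lines from boot
dmesg | grep -i -e dmar -e iommu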

     

Spaceinvader One has some great videos about doing VMs. This one is a little old, but you can see if it gets you to where you want to be.

  9. On 1/29/2021 at 7:57 AM, JorgeB said:

    If more ports are needed you can use multiple controllers, controllers with more ports (there are 16 and 24 port LSI HBAs, like the 9201-16i, 9305-16i, 9305-24i, etc) or use one LSI HBA connected to a SAS expander, like the Intel RES2SV240 or HP SAS expander.

I'm leaning towards the more-ports, fewer-cards approach, and you said "...9305-24i, etc." What other 24-port cards can you think of off the top of your head? The 9305-24i cards I found are available on Newegg and/or Amazon, but I'm not sure I can trust the sellers, and I checked Art of Server on eBay and they don't have any currently.

  10. 2 hours ago, ryujin921 said:

About the NVMes, you're better off with the SK hynix P41. Faster (and currently cheaper) than them FireCudas.

I'm having some difficulty finding the 2TB version of the P41 at the moment. I did read up on some reviews, though, and I think the reason I chose the FireCuda was its drastically higher TBW rating. I know I'll practically never hit it, but I don't want to take any chances down the road for a slight increase in cost.

     

    2 hours ago, ryujin921 said:

About the few PCIe slots, you can always use M.2-to-PCIe riser cables. One of them is already shared with the gen3 M.2 slot, btw.

Yeah, I know I'm trying to be both a beggar and a chooser, but I'm about to say screw it, get something stupid like a -24i HBA, slap it into the first slot, and call it a day... the lack of PCIe lanes on processors (or the addition of multiple M.2 slots in conjunction with PCIe slots) is killing me...

  11. Thanks again. This cleared up a lot of stuff.

     

Do you or anyone else have any good fan recommendations for a Norco 4224? I swapped the mid wall for a 3x 120mm fan wall, and I previously swapped out the two 80mm fans in the back, but the temps are getting higher than I'd like and I wanted to see people's recommendations for replacements.

@Vr2Io First off, thanks again for your quick responses. These are exactly the kind of answers/suggestions I'm looking for.

     

    8 hours ago, Vr2Io said:

That's fine, but the cost is really high; other than that it should be OK (of course you need two x8 slots).

Yeah, the motherboards I'm looking at all have x16 slots that are either x4 or x8 electrical (the one in my PCPartPicker list is x4 at PCIe 3.0 for the second two slots), which leads me to my next point.

     

    8 hours ago, Vr2Io said:

If each 9211-8i connects 8 disks, then ~4GB/s / 8 = 500MB/s per disk, which is far more than a 7200rpm spinner's real speed. The 9305 has double the bandwidth but also double the disk count, so actual bandwidth per disk won't change.

I'm not sure why, especially since I've had the three 9211s for 6+ years, I'm only now realizing that they are x8 cards. For some reason I thought they were x4 cards and as a result believed they needed to be upgraded. So I think in the short term I'll keep the 3x 9211s and rethink the motherboard configuration to get more x8-mode slots.
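
Just to convince myself with the numbers (theoretical link speeds, not measured throughput):

#!/bin/bash
# Per-disk bandwidth if every port on the HBA is busy at the same time
pcie2_lane=500   # ~MB/s per PCIe 2.0 lane
pcie3_lane=985   # ~MB/s per PCIe 3.0 lane
echo "9211-8i  (PCIe 2.0 x8, 8 disks):  $(( pcie2_lane * 8 / 8 )) MB/s per disk"    # ~500
echo "9305-16i (PCIe 3.0 x8, 16 disks): $(( pcie3_lane * 8 / 16 )) MB/s per disk"   # ~492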

     

    8 hours ago, Vr2Io said:

I mean limiting the build to fewer than 16 bays using a 3U/4U case. In my build the combination is a 9300 + 82885T; say you connect them in dual link, i.e. 4e+4e, then you can still connect 28 disks internally without blocking bandwidth. But it reduces the cost a lot (this depends on what price you can get the hardware for), and the expander can be placed anywhere to free up a PCIe slot; you just need to provide external power to it.

This is where my expertise is limited. I'm not sure which HBA or expander does what, as I'm not caught up in that field. I basically have to read forums until someone mentions a card, then do research to see if it'll work for my use case, and it's really slowing me down. Is the expander meant to let two cases share hard drives on the same Unraid array?

  13. 9 hours ago, Vr2Io said:

All would work, but I probably wouldn't build it this way:

- Two LSI 9305-16i are really expensive when they're only used to connect 24 hard disks

- 6 SAS-to-SATA cables

- DDR5 is almost double the price compared to DDR4

The two LSI 9305-16i's are an idea I had for two reasons. The first is that down the road I want to get another case. With Norco being extinct at the moment, I wanted to look at other options, which may have me splitting the two 9305-16i's across two chassis or getting one with 30+ slots, in which case I would need the additional capacity of the two 9305s. I don't want to go much past 30 because that seems like a lot of disks to have protected by only two parity drives.

     

    9 hours ago, Vr2Io said:

In your case, I would keep using the 9211-8i's, as the 9305-16i just helps free up one PCIe slot with no performance gain.

Second, and correct me if I'm wrong, but the 9305 is a PCIe 3.0 card vs. the 9211 being PCIe 2.0. So whether or not I eventually use all of the ports, the 9305 has 2x the bandwidth, and once I upgrade to all 7200rpm drives the overall system could be faster for reads (I know writes are still parity-limited). That would let me do heavier multi-disk reads without worrying about bandwidth issues.

     

The DDR5 thing I agree with 100%. It is crazy expensive right now, which is why I was also considering waiting until Raptor Lake or Zen 4 to see if the prices come down.

     

    9 hours ago, Vr2Io said:

Could you estimate whether 16 bays would already be enough, say 14x16TB = 224TB of data capacity? Because if you limit yourself to 16 disks, you can avoid an expensive high-port-count HBA.

    Could you clarify this?

The Norco 4224 chassis that I have has 24 slots, so (subtracting two disks for double parity) 22x16TB would give me ~350TB. As this is a build I hope not to have to change anything in for the foreseeable future, I want as much expandability as possible.

  14. Could anyone please tell me if this will work?

     

I've finally come to the realization that my old Intel G630 isn't cutting it anymore and decided on a rather large upgrade. After it lasted 10+ years, I want to spec out an upgrade that'll last for the foreseeable future (perhaps not 10 years, but that would be nice).

     

My budget is very loose. I've already made up my mind to the fact that I'll probably be spending $3-4k+ on these upgrades. My current build is as follows:

    Case: Norco 4224

    PSU: EVGA 850W 80+ Gold (can't remember the model but it's about 3 years old)

HDD: Mix and match, ranging from 3 to 16TB (21 disks total).

    Cache: 600GB WD Velociraptor

    Midwall Fans: 3x Cougar 120mm HDB

MB/RAM/CPU are all going to be replaced, as well as my 3x LSI 9211 HBAs.

     

    The parts I have selected so far are:

    https://pcpartpicker.com/list/sP9MFg

    2x LSI 9305-16i

    I expect to upgrade eventually to all Seagate Exos 16TB or larger unless recommended otherwise.

    Unknown fans (see below)

     

What I wanted to know is whether anyone sees anything they'd recommend changing, or notices anything that doesn't match up. Specifically, I wanted to pick people's brains about my choice of HBAs. I chose the motherboard in the part picker list because I wanted a 12th gen Intel CPU (for its iGPU). That, combined with a desire for a 10GbE port and two or more M.2 slots, led me to that board. It does, however, only have 3x PCIe slots (1x PCIe 5.0, 2x PCIe 3.0).

     

I'm going to get the 2x Seagate FireCudas for cache and VMs, and I'm going to use this server primarily to run Plex/Jellyfin. I pretty much don't want a bottleneck anywhere besides maybe the number of concurrent streams.

     

Also, if anyone can recommend 80mm or 120mm fans for the 4224 that would pull air through the hard drives, that would be great, as my HDD temps are getting up to 52°C+ during parity checks and that seems kind of warm to me.


I appear to be having an issue I just noticed recently. I was out of town on business for two months and came back to discover that my disk2 (sdr) had 1024 errors that occurred 1 second after a parity check started.

     

The parity check completed without incident, the disk is not currently marked as failed, and I checked the S.M.A.R.T. data and there don't appear to be any errors. I've attached a clipping from my syslog from when it happened (DiskErrors.txt), as well as the diagnostics. Any guidance on this matter would be appreciated.

     

The drive in question is a WD60EZAZ, which is an SMR drive. I know people have said in the past that they're OK to use, so if this is just a known issue and nothing to worry about, that would be great. However, I'd appreciate it if anyone could let me know how to clear/correct those errors if the drive is good, or help me decide whether the drive is bad.

     

    P.S. I ran a second parity check and it found zero errors as well.

    DiskErrors.txt server1-diagnostics-20220424-2210.zip

Anyone have any idea what might be causing this? Randomly, one person will drop files onto the server (having logged in with their user credentials) and another person (with their own r/w credentials for that same share) will be unable to access the files unless I run the "New Permissions" function under Tools. This is becoming increasingly problematic, as we have to wait for that script to finish before we can access the files, which defeats the whole purpose of having the server in the first place.
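
In case the details matter: my understanding is that the Tools function is essentially a recursive ownership/permission reset, so as a stopgap I've been considering scoping the same thing to just the affected share from the console, along these lines (a sketch, not anything official; "TeamShare" is a placeholder for the real share name):

# Reset ownership/permissions on one share instead of the whole array ("TeamShare" is a placeholder)
chown -R nobody:users /mnt/user/TeamShare
chmod -R u+rw,g+rw /mnt/user/TeamShare

I'd still like to understand why the files come in with the wrong permissions in the first place.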

     

    Any help would be GREATLY appreciated. Thanks in advance.