SergeantCC4

Members
  • Posts: 93

Everything posted by SergeantCC4

  1. I'd highly recommend the video below from Spaceinvader One. I followed that set of operations and was able to remove a handful (about six) of 2, 3, and 4TB disks and condense them down to one or two 16TB disks. The one thing you'll want to keep in mind: when you remove a disk that isn't at the tail of your array, you can end up with a slot gap, e.g. disks in slots 1-5 and 9-15 with nothing in between. That only matters if you're trying to maintain parity throughout. If not, you can follow the first part, where you just move your data (rough sketch below) and arrange the disks however you'd like.
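     For reference, the data-move half of that procedure boils down to something like this. Disk names are placeholders from my own setup, so adjust them, and note you want disk-to-disk paths here, not /mnt/user, so files don't collide with the user-share layer:

     ```bash
     #!/bin/bash
     # Empty a disk onto another array disk before removing it from the array.
     SRC=/mnt/disk9   # disk being removed (placeholder)
     DST=/mnt/disk1   # disk with enough free space (placeholder)

     # -a preserves permissions/ownership/timestamps, -X keeps extended attributes
     rsync -avX --progress "$SRC"/ "$DST"/

     # Dry-run with checksums afterwards; anything listed didn't copy cleanly
     rsync -avXn --checksum "$SRC"/ "$DST"/
     ```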
  2. Thanks @JorgeB. I was able to back up my files to the array, reformat the cache pool, and migrate my data back. Some strange networking bugs with WireGuard happened, but they seemed to fix themselves when I upgraded to 6.11.5. I read the post you linked and set up an hourly check using User Scripts. I also read a few other forum threads about the btrfs issues and set up a weekly balance, since it seemed there's really no harm in doing so. I've had a busy day and didn't get a chance to really look in depth, but is there anywhere I can start for a basic understanding of scrub and balance and how often, if at all, they should be run? I'm not sure I understand the purpose of these features. (Rough versions of the scripts I set up are below.)
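     In case it helps anyone else, the two User Scripts jobs look roughly like this. Paths assume a pool named "cache", and the notify script path is what's on my box, so double-check yours:

     ```bash
     #!/bin/bash
     # Hourly job: check the btrfs device error counters on the cache pool.
     # "dev stats -c" exits non-zero if any counter is above zero, so we can
     # fire an Unraid notification when something shows up.
     if ! btrfs dev stats -c /mnt/cache; then
         /usr/local/emhttp/plugins/dynamix/scripts/notify -i warning \
             -s "btrfs device errors detected on cache pool"
     fi
     ```

     ```bash
     #!/bin/bash
     # Weekly job: balance only data chunks that are less than 75% full,
     # which keeps unallocated space available without rewriting everything.
     btrfs balance start -dusage=75 /mnt/cache
     ```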
  3. I saw your post elsewhere recommending that, so I ran six passes total yesterday and it returned zero errors.
  4. With those being relatively new devices (<2 months) should I be worried that something is wrong with them? Was there something I could've done to prevent this?
  5. Recently, after updating to 6.11.1 (to correct the WireGuard display glitch), my server would intermittently become unresponsive and I would be forced to perform a hard reboot. This typically occurred while I was connected to my server via WireGuard (it runs Pi-hole, so I get the benefits remotely). I had previously gotten a corrupt docker image error, but when I was unable to stop/delete the image I simply rebooted the server and that seemed to fix the problem. I posted on another topic that I thought I was having the same issue, but I was mistaken. I tried to run the memtest built into the server, but upon selection the server just rebooted and came back to the Unraid boot screen. I read I was supposed to enable CSM, but for some reason when I did that my server would no longer POST. I disconnected my PCIe JBOD cards and my boot flash, was able to POST, and am now running memtest86 v10. The first pass just finished and hasn't found any errors, but I'm going to let it run through a few more cycles. My question, however: I have a dual-NVMe protected cache pool set up and I'm unsure whether I have the right settings, and/or whether I need to run a balance or scrub. It worked fine on 6.10, and I upgraded to 6.11 to take advantage of the iGPU in the 12500 I just upgraded to. I tried to find some documentation, but it seems to be a little fuzzy (or maybe I'm the fuzzy one) about what to do when, where, and why. Anyone have any recommendations? Thanks in advance! syslog.txt citadel-diagnostics-20221118-2026.zip
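     Follow-up for anyone who finds this later: the scrub I was asking about can be kicked off by hand while the array is up. Rough sketch, assuming the pool is mounted at /mnt/cache:

     ```bash
     #!/bin/bash
     # Scrub reads every block on both NVMe devices and verifies checksums;
     # on a mirrored (raid1) pool a bad copy gets repaired from the good one.
     btrfs scrub start -B /mnt/cache   # -B stays in the foreground until done
     btrfs scrub status /mnt/cache     # shows corrected/uncorrectable error counts
     ```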
  6. I am using neither qBittorrent nor Deluge, and my server has crashed multiple times over the last two weeks since moving to 6.11.1. Going to upgrade to 6.11.3 and see if it persists. Let me know if there is any information I can provide to assist. I've attached my diagnostics, but they only go back to the last time I rebooted, so I attached a larger cut of the syslog as well. Today's crash occurred while I was remotely accessing my server (via WireGuard): I couldn't load the Docker page, went to the Docker settings to try to stop the service, and then the UI stopped responding. That was at about noon today. citadel-diagnostics-20221118-2026.zip syslog.txt
  7. Does v6.11.1 change anything as far as setting up QuickSync for Jellyfin? With the most recent updates to Unraid, it seems the process for setting up HW transcoding has changed. Is anyone able to clarify, or point me in the direction of how to set up Intel QuickSync for a 12th gen iGPU in Jellyfin?
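     For anyone searching later, what eventually worked for me was just passing the iGPU render node into the container and flipping the transcoder setting. Sketch below; container name and paths are from my own template:

     ```bash
     # In the Unraid docker template this goes in "Extra Parameters"; as a
     # plain docker run it looks like:
     docker run -d --name=jellyfin \
       --device=/dev/dri:/dev/dri \
       -v /mnt/user/appdata/jellyfin:/config \
       -v /mnt/user/media:/media \
       -p 8096:8096 \
       jellyfin/jellyfin

     # Sanity check on the host: card0/renderD128 should exist if i915 loaded
     ls -l /dev/dri

     # Then in Jellyfin: Dashboard > Playback > Transcoding >
     # Hardware acceleration: Intel QuickSync (QSV)
     ```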
  8. Correct. I have two systems, both of which ran 6.10.3. The first I upgraded to 6.11, and the WireGuard issues started; I can still connect, it just has the UI issues described in this thread. The second is still on 6.10.3 with no problems at all.
  9. Same for me as well, ad blocker disabled too. I'll click one user's settings and it'll show; I'll select Done and try another user's and get a blank page. Since there is no "Done" or "X" icon, I can either press Escape or refresh the page to return to the WireGuard settings page. After a refresh, that user's settings randomly will or will not work, or other ones will work, then I'll refresh and they won't; typically more not working than working. I've tried Chrome, Firefox, and Edge with no success. Sometimes clicking the "View Peer Config" icon literally does nothing.
  10. Can anyone confirm whether this is working (H.265, tone mapping, etc.) on a 12th gen iGPU for Jellyfin? And if so, is there a beta release I need to use, or just Unraid 6.11 stable and the latest stable Jellyfin?
  11. Thanks. I know I'm being paranoid and picky. As I'm sure you can imagine, it would not be fun to drop $1-2k on an upgrade and forget one small detail that bites me on the behind later on.
  12. So I shouldn't have to worry too much about my (eventual) 24 hard drives capping out over the expander to the 9308-8i on a PCIe 3.0 x8 link?
  13. I was going to have (eventually) 20-24 Exos drives. I saw in that link you sent that there is an LSI 9207-8i PCIe gen3 x8 card, but it says "(4800MB/s)". I'm a little confused: I thought PCIe gen3 was 1GB/s per lane, so wouldn't this go up to 8GB/s total? I don't know what I'm missing. If that's the case, the motherboard I was looking at has a bunch of x4-mode PCIe 3.0 x16 slots, and if I put that card in one of those and it runs in x4 mode, I'd still get ~4GB/s per slot, and thus could use the 9207? Otherwise, if I get the 9308-8i and the Intel® SAS3 Expander RES3TV360 I was thinking of, I can hook up all 24 hard drives in dual link and get about the same performance as one 24i controller? I'm slightly out of my element with HBAs and expanders and how they work. For example, I'm having a hard time understanding dual link. Is it literally just both cables going from the HBA to the expander?
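     Answering my own confusion a bit after reading up; back-of-the-envelope numbers, double-check my arithmetic:

     ```bash
     #!/bin/bash
     # The "(4800MB/s)" on the 9207-8i is the SAS side, not the PCIe side:
     echo "SAS2 x8:   $(( 8 * 600 )) MB/s"   # 8 lanes x 6Gb/s ~= 600 MB/s each

     # PCIe 3.0 is ~985 MB/s usable per lane after 128b/130b encoding:
     echo "PCIe3 x8:  $(( 8 * 985 )) MB/s"   # ~7880
     echo "PCIe3 x4:  $(( 4 * 985 )) MB/s"   # ~3940 (x4-mode slot)

     # 24 Exos-class drives at roughly 270 MB/s sustained on the outer
     # tracks, all reading at once (i.e. a parity check):
     echo "24 drives: $(( 24 * 270 )) MB/s"  # ~6480

     # Dual link to a SAS3 expander means both SFF-8643 cables from the
     # 9308-8i to the expander: 8 lanes x 12Gb/s ~= 9600 MB/s on the SAS
     # side, so the PCIe 3.0 x8 slot (~7880 MB/s) becomes the ceiling.
     ```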
  14. Couldn't that lead to bottlenecking at some point, like during parity checks/rebuilds? I know it's a small portion of the use case and I'm being picky, but I'd rather get something I can move to another case and not have to worry about newer tech capping it out.
  15. To clarify, you have 16GB of each (4x4GB vs 2x8GB)? And you're using 25% of the 16GB? Those graphics cards are older than the CPU (by a lot in some cases); IMO I wouldn't use them. I'm not too familiar with virtualization in Unraid as I'm trying to get that started up myself, but I did notice that some virtualization features are not compatible with your CPU: http://ark.intel.com/products/52210 It seems VT-d (needed for PCIe passthrough) is not supported, but general virtualization (VT-x) is. Spaceinvader One has some great videos about doing VMs. This one is a little old, but you can see if it gets you to where you want.
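     If you want to double-check from a shell on the server, a quick sketch:

     ```bash
     #!/bin/bash
     # VT-x (basic virtualization): look for the vmx CPU flag (svm on AMD)
     grep -qE 'vmx|svm' /proc/cpuinfo && echo "VT-x/AMD-V present" || echo "no HW virtualization"

     # VT-d / IOMMU (needed for PCIe passthrough): check the kernel log
     dmesg | grep -iE 'DMAR|IOMMU' | head
     ```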
  16. I'm leaning towards the more-ports, fewer-cards approach, and you said "...9305-24i, etc"; what other 24-port cards can you think of off the top of your head? The 9305-24i cards I found are available on Newegg and/or Amazon, but I'm not sure I can trust the sellers, and I checked Art of Server on eBay and they don't have any currently.
  17. I'm having some difficulty finding the 2TB version of the P41 currently. Although I did read up on some reviews, and I think the reason I chose the FireCuda was its drastically higher TBW lifespan. I know practically it'll never get hit, but I don't want to take any chances down the road for a slight increase in cost. Yeah, I know I'm trying to be both a beggar and a chooser, but I'm about to say screw it, get something stupid like a -24i HBA, just slap that into the first slot, and call it a day... the lack of PCIe lanes on processors (or of multiple M.2 slots in conjunction with PCIe slots) is killing me....
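     On the TBW point, actual wear is easy to keep an eye on once the drive is in. Sketch; the device name will differ on your system:

     ```bash
     #!/bin/bash
     # NVMe drives report endurance directly: "Percentage Used" is the drive's
     # own estimate of consumed life; "Data Units Written" counts 512,000-byte units.
     smartctl -a /dev/nvme0n1 | grep -E 'Percentage Used|Data Units Written'
     ```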
  18. Thanks again. This cleared up a lot of stuff. Do you or anyone else have any good fan recommendations for a Norco 4224? I swapped the mid wall for a 3x120mm fan wall, and I previously swapped out the two 80mm fans in the back, but the temps are getting higher than I'd like and I wanted to see people's recommendations for replacements.
  19. @Vr2Io First off, thanks again for your quick responses. This is exactly the kind of answers/suggestions I'm looking for. Yeah, the motherboards I'm looking at all have x16 slots that are either x4 or x8 electrical (the one in my PCPartPicker list is x4 for the second two slots at PCIe 3.0). Which leads me to the realization of my next point: I'm not sure why, especially since I've had the three 9211s for 6+ years, I'm only now realizing that they are x8 cards. For some reason I thought they were x4 cards, and as a result I believed they needed to be upgraded. So I think I will keep the three 9211s in the short term and rethink the motherboard configuration to get more x8-mode slots. This is where my expertise is limited: I'm not sure what a given HBA or expander does, as I'm not caught up in that field. I basically have to read forums until someone mentions a card, then do research to see if it'll work for my use case, and it's really slowing me down. Is the expander meant to allow me to have two cases share hard drives on the same Unraid array?
  20. The two LSI 9305-16i are an idea I had for two reasons. The first was that I wanted to get another case down the road. With Norco being extinct at the moment, I wanted to look at other options, which may have me splitting the two 9305-16i's into two chassis, or getting one that has 30+ slots, in which case I would need the additional capability of the two 9305s. I don't want to go much past 30 because that seems like a lot of disks to have protected by only two parity drives. Second, and correct me if I'm wrong, but the 9305 is a PCIe 3.0 card vs the 9211 being PCIe 2.0. So whether or not I eventually use all of the slots, the 9305 has 2x the bandwidth, so once I upgrade to all 7200rpm drives the overall system speed could be faster for reads (I know writes are still parity-limited). That would give me greater multi-disk reads without worrying about bandwidth issues. The DDR5 thing I agree with 100%; it is crazy expensive right now, which is why I was also considering waiting until Raptor Lake or Zen 4 to see if the prices go down. Could you clarify this? The Norco 4224 chassis that I have has 24 slots, so (subtracting two disks for dual parity) 22x16TB would give me ~350TB. As this is a build I hope not to have to change anything in for the foreseeable future, I wanted to have the maximum amount of expandability possible.
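     Rough per-disk numbers behind that second reason (my own arithmetic, assuming every disk reads at once, as in a parity check):

     ```bash
     #!/bin/bash
     # 9211-8i: PCIe 2.0 x8 ~= 8 x 500 MB/s usable, 8 direct-attached disks
     echo "9211-8i:  $(( 8 * 500 / 8 )) MB/s per disk"    # 500
     # 9305-16i: PCIe 3.0 x8 ~= 8 x 985 MB/s usable, 16 direct-attached disks
     echo "9305-16i: $(( 8 * 985 / 16 )) MB/s per disk"   # ~492
     # Both stay above the ~270 MB/s a 7200rpm Exos sustains, so direct-attached
     # cards aren't the bottleneck; gen3 mainly matters once an expander puts
     # more disks behind a single x8 slot.
     ```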
  21. Can anyone tell me if this will work or not? And/Or any recommendations?
  22. Turns out when I stopped and started the array it cleared the errors. I ran another parity check to be sure and no errors have been back since.
  23. Could anyone please tell me if this will work? I've finally come around to realizing that my old Intel G630 isn't cutting it anymore and decided on a rather large upgrade. After it lasted 10+ years, I want to spec out an upgrade that'll last the foreseeable future (perhaps not 10 years, but that would be nice). My budget is very loose; I've already made my mind up to the fact that I'll probably be spending $3-4k+ on these upgrades. My current build is as follows:
      Case: Norco 4224
      PSU: EVGA 850W 80+ Gold (can't remember the model, but it's about 3 years old)
      HDD: mix and match ranging from 3 to 16TB (21 disks total)
      Cache: 600GB WD VelociRaptor
      Mid-wall fans: 3x Cougar 120mm HDB
      MB/RAM/CPU are all going to be replaced, as well as my 3x LSI 9211 HBAs. The parts I have selected so far are:
      https://pcpartpicker.com/list/sP9MFg
      2x LSI 9305-16i
      Eventual upgrade to all Seagate Exos 16TB or larger, unless recommended otherwise
      Unknown fans (see below)
      What I wanted to know is whether anyone sees anything they'd recommend changing, or notices anything that doesn't match up. Specifically, I wanted to pick people's minds about my choice of HBAs. I chose the MB in the PCPartPicker list because I wanted a 12th gen Intel CPU (for its iGPU); that, combined with a desire for a 10GbE port and 2 or more M.2 slots, led me to that board. It however only has 3x PCIe slots (1x PCIe 5.0, 2x PCIe 3.0). I'm going to get the 2x Seagate FireCudas for cache and VMs, and I'm going to use this server primarily to run Plex/Jellyfin. I pretty much don't want a bottleneck anywhere besides maybe the number of concurrent streams. Also, if anyone can recommend 80mm or 120mm fans for the 4224 that would pull air through the hard drives, that would be great, as my HDD temps are getting up to 52+ °C during parity checks and that seems kind of warm to me.
  24. Is there a way to clear the listed errors in that case? Edit: The reason I ask is that I keep getting an alert saying my array has failed due to those errors.