SergeantCC4

Members
  • Posts: 93
  • Joined

  • Last visited

Converted

  • Gender: Male
  • Location: Pittsburgh, PA

SergeantCC4's Achievements

Apprentice (3/14)

Reputation: 2

Community Answers: 1

  1. I'd highly recommend the video below from Spaceinvaderone. I followed that set of operations and was able to remove a handful (about 6) of 2, 3, and 4 TB disks and condense them down to 1 or 2 16 TB disks. The one thing you'll want to keep in mind is that if the disk you remove isn't at the tail of your array, you can end up with a gap in the slot numbering; for example, you may have disks in slots 1-5 and 9-15 and nothing in between. This only matters if you're trying to maintain parity throughout. If not, you can follow the first part of the video, where you just move your data and can arrange the disks however you'd like.
  2. Thanks @JorgeB. I was able to back up my files to the array, reformat the cache pool, and migrate my data back. Some strange networking bugs with WireGuard appeared, but they seemed to fix themselves when I upgraded to 6.11.5. I read the post you linked and set up an hourly check using User Scripts (a sketch of that kind of check is after this list). I also read a few other forums about the btrfs issues and set up a weekly balance, as it seemed there is really no harm in doing this? I've had a busy day and didn't get a chance to really look in depth, but is there anywhere I can start for a basic understanding of scrub and balance, and how often, if at all, they should be run? I'm not sure I understand the purpose of these features.
  3. I saw your post elsewhere to do that so I ran 6 passes total yesterday and it returned zero errors.
  4. With those being relatively new devices (<2 months), should I be worried that something is wrong with them? Was there something I could've done to prevent this?
  5. Recently, after updating to 6.11.1 (to correct for the WireGuard display glitch), my server would intermittently become unresponsive and I would be forced to perform a hard reboot. This typically occurred while I was connected to my server via WireGuard (it runs Pi-hole so I get the benefits remotely). I had previously gotten a docker image corrupt error, but when I was unable to stop/delete the image I simply rebooted the server and that seemed to fix the problem. I posted on another topic that I thought I was having the same issue, but I was mistaken. I tried to run the memtest built into the server, but upon selection the server just rebooted and came back to the Unraid boot screen. I read I was supposed to enable CSM, but for some reason when I did that my server would no longer POST. I disconnected my PCIe JBOD cards and my boot flash, was able to POST, and am now running Memtest86 v10. The first run just finished and hasn't found any errors, but I'm going to let it run through a few more cycles. My question, however, is that I have a dual NVMe protected cache pool set up and I'm unsure whether I have the right settings, and/or whether I need to do a balance or scrub. It worked fine during 6.10, and I upgraded to 6.11 to take advantage of the iGPU in the 12500 I just upgraded to. I tried to find some documentation, but it seems to be a little fuzzy (or maybe I'm the fuzzy one) about what to do when, where, and why. Anyone have any recommendations? Thanks in advance! syslog.txt citadel-diagnostics-20221118-2026.zip
  6. I am using neither qBittorrent nor Deluge, and my server has crashed multiple times over the last two weeks since moving to 6.11.1. Going to upgrade to 6.11.3 and see if it persists. Let me know if there is any information I can provide to assist. I've attached my diagnostics, but they only go back to the last time I rebooted, so I attached a larger cut of the syslog as well. Today's crash occurred while I was remotely accessing my server (via WireGuard) and couldn't load the docker page; I went to the docker settings to try to stop docker, and then the UI stopped responding. That was at about noon today. citadel-diagnostics-20221118-2026.zip syslog.txt
  7. Does v6.11.1 change anything as far as setting up QuickSync for Jellyfin? With the most recent updates to Unraid it seems the process for setting up HW transcoding has changed. Is anyone able to clarify, or point me in the direction of, how to set up Intel QuickSync for a 12th gen iGPU on Jellyfin? (A rough sketch of the device pass-through prerequisite is after this list.)
  8. Correct. I have two systems, both of which ran 6.10.3: 1. Upgraded to 6.11; the WireGuard issues started. I can still connect, it just has the UI issues described in this thread. 2. Still on 6.10.3; no problems at all.
  9. Same for me as well, ad blocker disabled too. I'll click one user's settings and it'll show. I'll select Done, try another user's, and get a blank page. Since there is no "done" or "x" icon, I can either press Escape or refresh the page to return to the WireGuard settings page. After a refresh that user's settings randomly will or will not work, or other ones will work, then I'll refresh and they won't; typically more are broken than working. I've tried Chrome, Firefox, and Edge with no success. Sometimes clicking the "View Peer Config" icon literally does nothing.
  10. Can anyone confirm whether this is working (h265, tone mapping, etc.) on 12th gen for Jellyfin? And if so, is there a beta release I need to use, or just Unraid 6.11 stable and the latest stable of Jellyfin?
  11. Thanks. I know I'm being paranoid and picky. As I'm sure you can imagine, it would not be fun to drop $1-2k on an upgrade and forget one small detail that bites me on the behind later on.
  12. So I shouldn't have to worry too much about my (eventual) 24 hard drives capping out the link from the expander to the 9308-8i over PCIe 3.0 x8?
  13. I was going to have (eventually) 20-24 Exos drives. I saw in that link you sent that there is an LSI 9207-8i PCIe gen 3 x8 card, but it says "(4800MB/s)". I'm a little confused: I thought PCIe gen 3 was about 1 GB/s per lane, so wouldn't that go up to 8 GB/s total? I don't know what I'm missing. If that's the case, the motherboard I was looking at has a bunch of x4-mode PCIe 3.0 x16 slots, and if I put that card in one of those and it runs in x4 mode I'd still get ~4 GB/s in each slot, and thus I could use the 9207. Otherwise, if I get the 9308-8i and the Intel® SAS3 Expander RES3TV360 I was considering, I can hook up all 24 hard drives in dual link and get about the same performance as one 24i controller? I'm slightly out of my element with HBAs and expanders and how they work. For example, I'm having a hard time understanding dual link. Is it literally just both cables going from the HBA to the expander? (A worked version of this bandwidth arithmetic is after this list.)
  14. Couldn't that lead to bottlenecking at some point, like during parity checks/rebuilds? I know it's a small portion of the use case and I'm being picky, but I'd rather get something I can move to another case and not have to worry about newer tech capping it out.
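
Regarding the hourly check mentioned in item 2: a minimal sketch of that kind of btrfs error check, written in Python for illustration (on Unraid it would normally be wrapped in a User Scripts shell script). The /mnt/cache mount point and the print-based notification are assumptions, not the exact script from the linked post.

    import subprocess

    # Hypothetical pool path; adjust to the actual cache pool mount point.
    POOL = "/mnt/cache"

    def btrfs_error_count(pool: str) -> int:
        """Sum the error counters reported by `btrfs device stats`."""
        out = subprocess.run(
            ["btrfs", "device", "stats", pool],
            capture_output=True, text=True, check=True,
        ).stdout
        # Each line looks like "[/dev/nvme0n1p1].write_io_errs   0"
        return sum(int(line.split()[-1]) for line in out.splitlines() if line.strip())

    errors = btrfs_error_count(POOL)
    if errors:
        # A real User Script could trigger an Unraid notification here instead.
        print(f"WARNING: btrfs reports {errors} accumulated errors on {POOL}")
    else:
        print(f"{POOL}: no btrfs device errors recorded")

For context on the question in item 2: a scrub reads existing data and metadata and verifies checksums (repairing from the mirror copy in a RAID1 pool), while a balance rewrites and repacks allocated chunks; the check above only looks at the accumulated device error counters.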
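On the QuickSync question in item 7: the usual prerequisite is that the host exposes the iGPU as a /dev/dri device and that device is passed into the Jellyfin container (for example with Docker's --device /dev/dri parameter). Below is a minimal sketch of that check in Python; the paths and the suggested parameter are assumptions for illustration, not an official Unraid or Jellyfin procedure.

    import os

    # Assumed location of the kernel DRM device nodes created by the i915 driver.
    DRI_DIR = "/dev/dri"

    def drm_nodes():
        """Return any card/render nodes the iGPU driver has created."""
        if not os.path.isdir(DRI_DIR):
            return []
        return sorted(os.path.join(DRI_DIR, name) for name in os.listdir(DRI_DIR))

    nodes = drm_nodes()
    if nodes:
        print("Found DRM nodes:", ", ".join(nodes))
        print("Suggested container extra parameter: --device /dev/dri")
    else:
        print("No /dev/dri found; the i915 driver may not be loaded for the iGPU.")

Inside Jellyfin itself, hardware acceleration would still need to be set to Intel QuickSync (QSV) under the playback/transcoding settings.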
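On the bandwidth arithmetic in items 12-14: the "(4800MB/s)" quoted for the 9207-8i is the SAS side of that card (8 ports of 6 Gb/s SAS2, about 600 MB/s each), not its PCIe 3.0 x8 link, and dual link simply means both HBA-to-expander cables are connected so all 8 SAS lanes are usable. A rough worked version follows; the per-lane, per-port, and per-drive figures are approximations rather than vendor specs.

    # Back-of-the-envelope throughput check for 24 drives behind an expander.
    PCIE3_PER_LANE_MBS = 985      # usable PCIe 3.0 bandwidth per lane (approx.)
    SAS2_PER_PORT_MBS = 600       # one 6 Gb/s SAS2 lane
    SAS3_PER_PORT_MBS = 1200      # one 12 Gb/s SAS3 lane
    DRIVE_MBS = 270               # rough sustained rate of a large Exos HDD
    DRIVES = 24

    sas2_total = 8 * SAS2_PER_PORT_MBS    # 9207-8i SAS side: 4800 MB/s
    sas3_total = 8 * SAS3_PER_PORT_MBS    # 9308-8i dual-linked to the expander: 9600 MB/s
    pcie_x8 = 8 * PCIE3_PER_LANE_MBS      # ~7880 MB/s
    pcie_x4 = 4 * PCIE3_PER_LANE_MBS      # ~3940 MB/s

    demand = DRIVES * DRIVE_MBS           # all drives streaming at once, e.g. a parity check
    print(f"24 drives at full tilt need roughly {demand} MB/s")
    print(f"9207-8i ceiling: {min(sas2_total, pcie_x8)} MB/s (SAS2 side is the limit)")
    print(f"9308-8i ceiling: {min(sas3_total, pcie_x8)} MB/s (PCIe 3.0 x8 is the limit)")
    print(f"Same card in an x4-mode slot: {min(sas3_total, pcie_x4)} MB/s")

Under these assumptions, a 9308-8i in a true x8 slot stays above what 24 spinning drives can stream during a parity check, while a 9207-8i, or either card dropped into an x4-mode slot, would throttle it somewhat.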