rodan5150

Posts posted by rodan5150

  1. First thing, your containers need to have paths mapped like this:

    Container path: /data

    Host path: /mnt/user/data/*******

     

    I'd get rid of any other path mappings for anything media related; obviously keep the appdata path mappings and such. This applies to all of the *arrs that manipulate files, as well as sabnzbd and qbittorrent. The point is for Docker to treat it all as one file system, so that atomic moves can happen. If the paths vary at all, e.g. the container path on one is "/data" and on another it is "/data/media", Docker sees them as separate file systems, and thus no atomic moves.

     

    Your Sonarr and Radarr appear to have superfluous mappings to the same locations, like your /media. There is no need for this; you just pass the "/data" path per the above. The same goes for Plex (not for atomic moves, but for simplicity and consistency's sake). If I'm setting this up for someone, I typically delete the default mappings to avoid confusion later; all I ever leave is the "/data" mapping or equivalent. Once the correct path is passed to the container, you just "drill down" to the proper subdirectory within the app itself. For example, with "/data" mapped into the Plex container, you point the TV library to "/data/media/tv" and the movie library to "/data/media/movies", etc. Make sense? (There's an example mapping further down in this post.)

     

    Another tip is to use all lower case, since Linux is case sensitive: TV is different from tv, Movies is different from movies, etc. It is very easy to make mistakes with path mappings if you mix case Windows-style. For example, your Radarr is pointed to a host path of "/mnt/user/data/media/Movies", which probably doesn't exist if you created it as "movies" following the TRaSH guide.

     

    Straighten all of that out and see if it works. If you run into directory permission errors, you can try running the "Docker Safe New Perms" tool under Tools.
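
    For anyone replicating this outside the Unraid template editor, the equivalent docker run mapping would look something like the sketch below. The image tag, port, and host paths are just examples following the TRaSH guide layout; the single shared "/data" mapping is the point.

      # one shared mount for the *arrs, sabnzbd, qbittorrent, Plex, etc. (example values)
      docker run -d --name sonarr \
        -p 8989:8989 \
        -v /mnt/user/data:/data \
        -v /mnt/user/appdata/sonarr:/config \
        lscr.io/linuxserver/sonarr:latest

    Inside the container, Sonarr then sees the downloads and the library under the same /data tree (e.g. /data/media/tv), all on one file system, so an import is an atomic move instead of a copy+delete.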

     

    Hope this helps, and good luck!

     

  2. On 12/13/2021 at 1:58 AM, ich777 said:

    That's really strange, I've also tried it now and it doesn't work either, but I also have to say that it seems now I've messed up my Jellyfin installation because I can't tone map any more, will look into this.

    Thanks again for your help. No rush on this; I'm primarily a Plex user and just in the testing phase with Jellyfin at this point, looking to have a backup option for Plex. I've been a Plex user for a decade or so, and I get worried about the direction they are headed in sometimes. Plus, I love to support the open source community, so Jellyfin makes a lot of sense to me vs. Emby.

  3. I assume you are doing a file transfer and the speed is dropping off? I do not have Mellanox cards, but I do have the same MikroTik switch. I have no issues with sustained reads/writes, as long as the share is set to use the cache, which is NVMe, same as in your setup. For me to get max speed, I have to use an MTU of 9000 on both machines.

     

    The fact that your speed drops after a period of time tells me some sort of buffer is filling up, would be my guess. Have you done any testing with iperf3? You could run a test longer than 30 seconds, or whatever the point where your file transfer drops off. That would prove it's not the NICs or the switch, since iperf3 runs entirely from RAM.

     

    If you aren't familiar with iperf3, the NerdTools plugin for Unraid includes iperf3 as one of its packages. Once you have it installed, you'd run "iperf3 -s" on your Unraid box. You also need iperf3 for Windows, of course; then you'd do something like "iperf3 -c <ip of unraid box> -t 60 -P 10" for 60 seconds and 10 streams. Adjust accordingly. To test the other direction, add "-R" to the end, which reverses the flow so the Unraid box sends and your Windows box receives. (Full example at the end of this post.)

     

    Other things to check are cabling and infrastructure. Are you using copper, fiber, or DAC cables? If copper, what type: CAT6, CAT6A? How long are the runs? Lots of factors could come into play. I always start with iperf, though. If it tells me I'm good over a particular link, and a test file transfer is then much slower, it's usually a misconfiguration (not using a share that is assigned to the NVMe cache) or a disk performance issue (transferring a ton of small files, like a Plex database, kills performance even on NVMe drives).
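
    For reference, a full test session would look roughly like this (the duration and stream count are just examples; swap in the actual IP of your Unraid box):

      # on the Unraid box (server side)
      iperf3 -s

      # on the Windows box: 60 second run, 10 parallel streams, Windows -> Unraid
      iperf3 -c <ip of unraid box> -t 60 -P 10

      # same test with the direction reversed (Unraid sends, Windows receives)
      iperf3 -c <ip of unraid box> -t 60 -P 10 -R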

     

    Hope this helps.

  4. 1 hour ago, jena said:

    "from PC-2.5G NIC to another server with 2.5G NIC" this was tested on 10G switch with two RJ45-SFP+ adapter. The adapter has auto negotiation and can work at 1/2.5/5/10G.

    Also PC and server have default 1500MTU, no jumbo frame. 

    In my experience, a 9000 MTU is necessary to see full 10GbE bandwidth (9014 is what I have set, per the Intel driver in my Windows box; same card in my Unraid server, so I set it to match). Though, IIRC, I still saw much faster than 1GbE speeds with the default 1500 MTU. (Quick example of setting it at the end of this post.)

     

    edit: For reference, I have the same 10GbE switch as you, and the Unraid server connects to the switch via DAC. The Windows box is on copper CAT6A in the walls, close to 20m+ in length, and I get at or near the theoretical max for 10GbE in both directions. NICs are Intel X540-T2s in both machines.
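
    If anyone wants to experiment with jumbo frames on the Unraid side from the command line first, something like this works as a quick, non-permanent test. The interface name is just an example, and every device in the path (switch ports included) has to support 9000 as well; IIRC the permanent setting lives under Settings > Network Settings.

      # temporary, reverts on reboot; eth0 is an example interface name
      ip link set eth0 mtu 9000
      ip link show eth0 | grep mtu    # confirm it took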

  5. 10 hours ago, dauntouch said:

    I "fixed" it by downgrading to 6.8.3

    I was close to doing this, but I figured I'd tough it out and give 6.9.x a shot. So far, the br2 network for the containers I want to have their own IP has been working well. No call traces yet, and certainly no kernel panics. Of course, it has barely been over a week since I made that change; if I can still say this a month out, then I will consider it good to go.

  6. On 5/29/2021 at 12:06 AM, dauntouch said:

    Hey did you figure it out by any chance? I'm having the exact same issue, same cpu + mobo with a kernel panick. Tried disabling c states and the idle setting, it worked for a couple weeks then started crashing again.


     

     

    14 hours ago, John_M said:

     

    Don't do both. Re-enable the Global C States. See above. The kernel panic is not caused by the CPU failing to wake up from the C6 state. If that happens the server just freezes, but the Power Supply Idle Control setting fixes that. As mentioned above (twice), the panic is likely related to docker networking.



    Yeah, I reverted the C-states change back to default. The only thing I have set now is the Power Supply Idle Control.

    So far, what has me "fixed" is that I've moved all of my Docker containers that needed a custom network (static IP) over to a separate NIC (br2). I also disabled VLANs in Unraid, since I wasn't using them anymore. No kernel panics or anything, yet anyway; it's been over a week now. Fingers are crossed! (Rough sketch of the equivalent macvlan setup below.)
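
    For anyone curious what that amounts to under the hood, Unraid sets up a macvlan network on the chosen interface; done by hand it would look roughly like the sketch below. The subnet, gateway, network name, and the Pi-hole container are all just examples, not my actual setup.

      # example only: macvlan network on the second NIC, then pin a container to a static IP on it
      docker network create -d macvlan \
        --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
        -o parent=br2 br2
      docker run -d --name pihole --network br2 --ip 192.168.1.50 pihole/pihole:latest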

  7. 7 hours ago, JorgeB said:

     

    Thanks for the reply, JorgeB. I'm going to give the second NIC assignment a shot. I had been trying to do all of this through a single 10GbE connection. I've got several 1GbE ports open on my main switch, so it's not a huge deal to assign the containers to a second NIC. With any luck, this will solve it.

  8. I looked it over; I'm no expert, but you do have quite a few errors and warnings. Not sure what is critical or would cause hangups/crashes.

     

    Anyway, I took your CSV file, sorted it in descending order by date and time stamp, then exported it as a tab-delimited txt file. Maybe this will help others interpret it better.

    All_2021-4-28-10_7_11_tab delimited.zip

  9. On 4/21/2021 at 11:55 AM, John_M said:

     

    I usually find it in the AMD CBS section of the menu but wherever you go to find the Global C-states setting, you might find it there too.

     

    That's exactly where it was. I enabled the C-states option, and then set the idle current to typical instead of low. 

    I've also created a Docker-specific VLAN and moved everything on br0 over to br0.x, so hopefully that will keep my call traces and kernel panics at bay. I will update if anything changes. So far so good, but it has only been about 18 hours or so. The longest it has gone in the past was 10-ish days, so if I can hit 2 weeks+ I'll consider it a win.

  10. 8 hours ago, John_M said:

     

    There are a lot of posts about macvlan call-traces related to docker containers with custom IP addresses, such as this one:

     

    Don't do that. The only C-state that has ever been a problem with Ryzen processors is C6 and that only really affected the 1000-series. That said, it does no harm to find the BIOS setting that refers to Power Supply Idle Control and set it to Typical Current Idle instead of the default Low Current Idle.

     

    Awesome, thanks for letting me know. I will revert the global C-state setting, and dig around and see if I can find the idle current setting.

  11. Hello all,

     

    I've been fighting a few issues since I "upgraded" to a Ryzen-based system from an old dual Xeon Dell T410.

    The new build is a Ryzen 3600 on a B450 chipset. It was getting unresponsive after a few days, which seems to have gone away after disabling global C-states in the BIOS. The latest issue is that I get a kernel panic after a week or so of uptime. I have syslog enabled and was able to capture it just before the panic. I also got a pic of the screen before I rebooted that tells the rest of the story after the syslog dropped. It looks network related, maybe macvlan? Definitely a call trace happens, but I'm not sure what can be done about it.

     

    Thanks in advance for any help!

     

    Ross

     

    IMG_20210420_170250.jpg

    syslog 4-20-21.txt

  12. I had a very similar situation this morning. I went ahead today with replacing a "bad" flash drive, a 32GB USB 2.0 SanDisk Cruzer Fit. I had replaced an old/cheap drive back in June of last year that actually was bad, so I had to go through support to get reinstated. Not quite as hassle free, but they were pretty quick to respond and get me fixed up!

     

    Anyway, I was getting read errors:
     image.thumb.png.8c70298e016c8fc375467f7ee9aca9d9.png

     

    I thought for sure the flash drive was dead, because when I plugged it into my Windows machine to at least yank the config file, it hosed things up proper: Windows Explorer crashed, I couldn't get into Disk Management, and nothing I tried would let me see or access the drive. I called it and went ahead with replacing it with a backup Cruzer I still have NIB. Then a while later I got curious and fiddled with it some more. Obviously the flash drive had had a chance to cool down and such, but it has now been solid through read/write/verify testing for nearly 5 hours without a single error. So it looks like I may have replaced it prematurely, but I'm not sure why it hosed up so badly to begin with.

     

    image.thumb.png.af77e04920b8a8466a45992ba1a05ceb.png

     

    Server is going good now, no more errors. Just thought I'd tell my story as well, just in case someone else has similar errors pop up. It may not be the flash drive.
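
    If anyone wants to put a suspect stick through a similar read/write/verify pass before tossing it, a non-destructive badblocks run from any Linux box is one option (this is just an illustration, not the exact tool I used). /dev/sdX is a placeholder for the flash drive, make sure it is unmounted first, and expect it to take hours on a 32GB USB 2.0 stick.

      # -n = non-destructive read-write test, -s = show progress, -v = verbose
      badblocks -nsv /dev/sdX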

     

    -Ross

  13. I just installed the plugin for the first time. It will not let me sign in, I just get the perpetual spectra wave of death, regardless of browser or pop-up blocker settings. If it is down for others, that may explain it. I still have access to my server, so no big deal.

  14. 13 minutes ago, JorgeB said:

    Since the array was kept in use old parity is probably no longer in sync, but disk6 looks fine, so I would recommend replacing cables on that disk then do a new config and try to re-sync the new parity again, if you think old parity is still valid do it in maintenance mode so there's a chance to use it if it errors out again.

    10-4. I'll give that a shot and report back. Disk6 is coincidentally right next to the slot where I replaced the parity drive, so there is a decent chance I disturbed the cable. It's almost done with an extended SMART test; if it passes that with no issues, I'll have higher confidence in it. Thanks JorgeB.

  15. Update: I read about the "New Config" utility, but it makes me nervous, so I want to make sure I'm going to proceed correctly. I assume I replace the failed/untrusted drive with the new pre-cleared drive, use the New Config tool to assign it to the same position in the array, start the array, and let it rebuild? But what about parity? I don't know how valid the parity is on the original 10TB parity disk; the array has been started and online without it assigned to the parity slot, so it's undoubtedly not in sync. If I've lost the data on that 4TB drive, it's not that big of a deal, as anything critical is backed up off-site. But I don't want to nuke the array by accident and lose all of my media, as that would take forever to replace.

     

    I don't want to do anything until an expert chimes in. I've used Unraid for a couple of years now, but I've never dealt with this particular situation before and want to tread lightly.
