Mizerka

Members
  • Content Count: 29
  • Joined
  • Last visited

Community Reputation: 3 Neutral

About Mizerka
  • Rank: Member

  1. Hmm, scrap that; I played around with it more and ruled out local hardware and networking, all of which looked as expected. The issue is isolated to the vpn tunnel. I say that because I've also tried another brand new container (same results) and a brand new qbit container (same results). Attached is what the traffic looks like, with the spikes being when I briefly turned the vpn off for testing, where you can clearly see a jump to the expected 13-15MiB/s. So, playing around with the ovpn files, it looks like it's not liking tcp: after changing the NordVPN connection profile to udp, it instantly kicked back into proper speeds, saturating the entire wan link at 13MiB/s down. Both results can be replicated on the delugevpn and qbittorrentvpn containers with the default config. The change itself must've landed over a week ago, or was introduced in unraid 6.8, as I hadn't noticed it before. The relevant ovpn lines are below.
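     For anyone comparing profiles, this is roughly what the proto switch looks like in a NordVPN .ovpn file; the server hostname and ports here are placeholders from memory, so check against your own download from NordVPN:

         # tcp profile - crawled to ~3MB/s through the tunnel for me
         proto tcp
         remote uk1234.nordvpn.com 443

         # udp profile - back to saturating the wan link
         proto udp
         remote uk1234.nordvpn.com 1194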
  2. Hey, thanks for the work on the container; lately I seem to be really struggling with down speeds. I can't seem to get anything more than 3MB/s down and 1MB/s up, and it must've been happening for around a week or two (I auto-update containers, so I can't tell exactly). Previously I'd easily saturate the wan link (130Mbps down and 40Mbps up). Were there any changes or anything that'd affect this? I did upgrade to unraid 6.8 around the same time as well, if that changes anything. Using a NordVPN UK p2p tcp vpn (same server and ovpn file, but I've tried others as well). Thanks
  3. I love unraid for the true JBOD experience; having moved from FreeNAS, the UI and features are well worth the price tag. I'd love to see proper SSD, M.2 and NVMe integration, including full flash arrays, or at least improved compatibility with them. Implementing a number of community addons and plugins would be nice as well.
  4. Thanks for flagging this, wasn't aware of it.
  5. oh, you're right, I missed that:

     nobody 13716  8.7 92.7 92190640 91863428 ?  Sl  06:34  66:18  | | \_ /usr/bin/python -u /app/bazarr/bazarr/main.py --no-update --config /config

     okay, killing it for now then. I guess it's some memory leak; I've never seen it use that much. Thanks
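     A quick way to spot and stop a runaway process like that, assuming the usual procps tools (the PID is the one from the output above):

         ps aux --sort=-%mem | head -n 5   # list the biggest memory consumers first
         kill 13716                        # polite SIGTERM first
         kill -9 13716                     # SIGKILL if it ignores that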
  6. Guess who's back, back again. Array's dead, dead again. I've isolated one of the cores after a forced reboot, so now at least the webgui is usable (I guess isolation from everything but the unraid os? okay), despite every other core sitting at 100%. Dockers are mostly dead due to lack of cpu time, but sometimes respond with a webpage or output; shares are working almost normally as well. Nothing useful in the logs again. After removing plugins one by one, the array returned to normal after killing the ipmi or temperature sensor plugin, so it's interesting that that'd brick unraid out of nowhere... oh well, we'll see tomorrow. The isolation itself is just a kernel flag; see the sketch below.
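     A minimal sketch of the core isolation, assuming it's done the usual unraid way via the syslinux append line (core 0 here is just an example; newer unraid builds also expose this in the webgui under CPU pinning):

         # /boot/syslinux/syslinux.cfg - keep core 0 away from the scheduler
         append isolcpus=0 initrd=/bzroot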
  7. sure, well I give up then, good luck. The only other thing in terms of config is that you have disk shares force-enabled; you're better off using user shares, or leaving it on the auto default and mounting the disk outside of the array if that's what you need.
  8. those dns servers are a bit weird. The first is likely your router, but the other 2 are public and strange; I'd probably change it to the local router only. These are given out by your dhcp server, i.e. the router, which again is strange: one of them points to some random location in Romania. And yeah, ipv6 is enabled, so it picked up an fe80:: address. You should disable dhcp for something like unraid; it'll just cause you issues one day. Comparing my config to yours, there's nothing wrong unraid-side, and it doesn't report issues either. Make sure you have file sharing and discovery completely enabled; a couple of quick checks are below. edit: It will be windows, 100%. unraid will use at least smb2 by default, so that's fine.
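     A couple of quick sanity checks, assuming a typical setup (192.168.0.1 standing in for your router's address, unraid.local for your server's name):

         cat /etc/resolv.conf                 # what unraid is actually resolving with
         nslookup unraid.local 192.168.0.1    # query your router directly, bypassing the odd resolvers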
  9. that looks retro anyway. Tried applying those changes? Since you can ping it and it echoes back, that's good enough; from here on it'll be a layer 4 issue onwards. I'd still wager it's a microsoft service issue. Got any other machines on the network that can access this share, btw? Also, looking at the logs briefly, you have a nameserver set to 193.231.252.1 - typo, or is that some strange public resolver you use? It doesn't actually respond on 53 by the looks of it; you can confirm that with the query below.
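     To confirm that resolver really isn't answering, a one-liner like this (dig syntax; nslookup works too) should simply time out if nothing is listening on port 53:

         dig @193.231.252.1 example.com +time=3 +tries=1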
  10. I'm pretty sure that's the default behaviour when making changes to a share that's already deployed: it kills off connections and relies on the client to re-establish them. Looking at my logs, it's specifically run when changing smb security settings; below is changing public to secure:

      Dec 18 23:00:18 NekoUnRaid root: Starting Samba:  /usr/sbin/nmbd -D
      Dec 18 23:00:18 NekoUnRaid root:                  /usr/sbin/smbd -D
      Dec 18 23:00:18 NekoUnRaid root:                  /usr/sbin/winbindd -D
      Dec 18 23:00:18 NekoUnRaid emhttpd: shcmd (509): smbcontrol smbd close-share 'ssd'
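      If you want to watch the sessions drop and come back while you flip the setting, smbstatus shows them live:

          smbstatus -b    # brief list of current smb sessions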
  11. hmm, those look fine, and you don't need dhcpv6 for ipv6 devices to work. apipa v4 (169.254.x.x) is a pain to use, but ipv6 fixes that and can manage itself on networks without dns quite painlessly; win10 prefers it by default. Anyway, I doubt that'll be it, but you can try disabling ipv6 on your client interface to force it onto v4 (network discovery worked fine over v6 by the looks, though): win+r, NCPA.CPL, default eth interface, and untick ipv6. Also, you don't have any firewalls/av in place that might prevent it? Try disabling the windows firewall (temporarily, if you actually use it); a quick way is below. edit: also make sure you have enabled discovery and file sharing on your network/s (just add it to all for now), found in network and sharing center.
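      For the firewall test, from an elevated cmd prompt (turn it back on once you've finished testing):

          netsh advfirewall set allprofiles state off
          netsh advfirewall set allprofiles state on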
  12. can you ping it? From cmd:

      ping x.x.x.x

      try the hostname as well:

      ping hostname.local
  13. access it by ip and at the root, not any shares, i.e. \\192.168.0.200\ - if you get prompted for creds, use your unraid user/root. If the share is public, try to access it directly.
  14. tried changing the unraid management port to something else?

      cd /boot/config/
      nano go

      add -p to the default emhttp line, like below:

      /usr/local/sbin/emhttp -p 8008 &

      change 8008 to whatever is unique on your host
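      After a reboot you can sanity-check that emhttp picked up the new port (netstat syntax here assumes the usual net-tools build; adjust the port to whatever you chose):

          netstat -tlnp | grep 8008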
  15. Hey, bit of a strange one. I've been on and off upgrading and working on unraid (in the middle of encrypting the entire array), and over the last week it crashed twice on me. I say crashed, but it's actually sitting at 80-100% cpu usage across all cores, which it never does (24 cores, only running a few dockers), and even more interestingly it's using 100% of the ram. Yesterday I tried a few things like killing docker, force restarting, shutting down the array etc, but nothing worked; the webui was somewhat usable and I could console onto it as well. Output of top:

      top - 19:23:29 up 22:10,  1 user,  load average: 154.26, 151.66, 147.51
      Tasks: 1062 total,   3 running, 1059 sleeping,   0 stopped,   0 zombie
      %Cpu(s): 19.1 us,  4.9 sy,  0.1 ni,  7.8 id, 67.4 wa,  0.0 hi,  0.8 si,  0.0 st
      MiB Mem :  96714.1 total,    520.3 free,  94856.5 used,   1337.3 buff/cache
      MiB Swap:      0.0 total,      0.0 free,      0.0 used.    223.3 avail Mem

        PID USER     PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
       1809 472      20   0  153908  20228      0 D 501.0  0.0 301:48.74 grafana-server
       1835 root     39  19       0      0      0 S  16.5  0.0  17:38.78 kipmi0
      25784 root      0 -20       0      0      0 D  10.2  0.0  42:59.54 loop2
        914 root     20   0       0      0      0 S   6.6  0.0  17:07.15 kswapd0
      11948 root     20   0    3424   2308   1680 R   5.0  0.0   0:00.15 lsof
       5922 root     20   0 5413656  61964      0 S   4.6  0.1 253:53.78 influxd
       6737 root     20   0  175536  18744      0 S   3.0  0.0 283:15.09 telegraf
        915 root     20   0       0      0      0 S   2.0  0.0   8:00.16 kswapd1
      12001 root     20   0       0      0      0 I   2.0  0.0   1:44.95 kworker/u50:0-btrfs-endio
      17414 root     20   0       0      0      0 I   1.7  0.0   0:10.87 kworker/u49:0-btrfs-endio
      18371 root     20   0       0      0      0 I   1.7  0.0   0:36.69 kworker/u49:3-btrfs-endio
      25464 root     20   0       0      0      0 I   1.7  0.0   0:06.19 kworker/u49:5-btrfs-endio
       9124 nobody   20   0  150628  11848   6372 S   1.3  0.0   4:15.60 nginx
       9652 root     20   0       0      0      0 I   1.3  0.0   0:33.14 kworker/u50:7-btrfs-endio
      16159 root     20   0       0      0      0 I   1.3  0.0   0:27.58 kworker/u50:6-btrfs-endio
      23079 root     20   0       0      0      0 R   1.3  0.0   0:39.64 kworker/u49:4+btrfs-endio
      10860 root     20   0       0      0      0 I   1.0  0.0   0:03.74 kworker/u49:11-btrfs-endio
      10955 root     20   0       0      0      0 I   1.0  0.0   0:02.17 kworker/u49:13-btrfs-endio
      11390 root     20   0    9776   4396   2552 R   1.0  0.0   0:00.10 top
       2533 root     20   0       0      0      0 I   0.7  0.0   0:20.58 kworker/u49:7-btrfs-endio
       2621 nobody   20   0 7217200 113880      0 S   0.7  0.1   6:06.13 jackett
       7894 root     22   2  113580  24708  19188 S   0.7  0.0   8:01.97 php
       9093 root     20   0  283668   3948   3016 S   0.7  0.0   8:15.71 emhttpd
      25333 root     20   0 1927136 120228    976 S   0.7  0.1 171:10.80 shfs
      31883 nobody   20   0 4400376 491120      4 S   0.7  0.5  15:38.17 Plex Media Serv
        147 root     20   0       0      0      0 I   0.3  0.0   0:29.49 kworker/14:1-events
        936 root     20   0  113748  13228   7672 S   0.3  0.0   0:00.22 php-fpm
       1662 root     20   0   36104    988      0 D   0.3  0.0   0:07.78 openvpn
       1737 root      0 -20       0      0      0 I   0.3  0.0   0:02.95 kworker/12:1H-kblockd
       2530 root     20   0 8677212 232344      0 S   0.3  0.2   1:09.19 java
       2794 nobody   20   0  197352  51372      0 D   0.3  0.1   0:44.64 python
       6629 root     20   0       0      0      0 I   0.3  0.0   0:27.33 kworker/8:2-events
       7607 root     20   0    3656    232    196 D   0.3  0.0   0:00.01 bash
      14099 root     20   0   33648  14988     52 D   0.3  0.0   0:13.68 supervisord
      18350 root     20   0       0      0      0 I   0.3  0.0   0:42.51 kworker/u50:3-btrfs-endio
      21018 nobody   20   0 3453756 581936      4 S   0.3  0.6  51:39.60 mono
      21121 nobody   20   0 2477100 532884      4 S   0.3  0.5   8:34.30 mono
      22859 root     20   0       0      0      0 I   0.3  0.0   0:12.78 kworker/u50:5-btrfs-endio-meta
      25853 root     20   0 2649020  45064  19816 S   0.3  0.0  81:46.26 containerd
      31808 root     20   0   76984    664    412 D   0.3  0.0   0:01.23 php7.0
      32288 nobody   20   0  429092   1852      0 S   0.3  0.0   0:21.01 Plex Tuner Serv
      32302 root     20   0       0      0      0 I   0.3  0.0   0:30.08 kworker/19:1-xfs-buf/md9
          1 root     20   0    2460   1700   1596 S   0.0  0.0   0:13.40 init
          2 root     20   0       0      0      0 S   0.0  0.0   0:00.07 kthreadd
          3 root      0 -20       0      0      0 I   0.0  0.0   0:00.00 rcu_gp
          4 root      0 -20       0      0      0 I   0.0  0.0   0:00.00 rcu_par_gp
          6 root      0 -20       0      0      0 I   0.0  0.0   0:00.00 kworker/0:0H-kblockd
          9 root      0 -20       0      0      0 I   0.0  0.0   0:00.00 mm_percpu_wq
         10 root     20   0       0      0      0 S   0.0  0.0   0:27.84 ksoftirqd/0
         11 root     20   0       0      0      0 I   0.0  0.0   2:03.96 rcu_sched
         12 root     20   0       0      0      0 I   0.0  0.0   0:00.00 rcu_bh

      Clearly grafana is having some fun there, managing 500% cpu (it goes up to like 900% sometimes). But even trying to kill it doesn't work; by that I mean it refuses to die, even when force-killing the entire docker. The only thing I can think of recently is that I've started to run some youtube-dl scripts as part of the recent yt changes, to archive some channels, but that's hardly doing anything destructive imo; it writes some temp files then remuxes the parts into single mkvs etc, but that's about it, and it's all done by another client locally as well. Attached diagnostics; unraid is still running atm, but I'll probably kill it before the end of the night. Any help is appreciated. nekounraid-diagnostics-20191218-1910.zip
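      Worth noting for anyone reading along: several of those processes (grafana-server, loop2, openvpn, supervisord, bash) are in the D state, i.e. uninterruptible sleep, and the 67.4 wa figure says the box is mostly stuck waiting on I/O. A process in D state won't act on any signal, SIGKILL included, until its I/O call returns, which would explain why force-killing the container does nothing. A quick way to list the stuck ones, assuming standard procps:

          # show every process in uninterruptible sleep and what kernel call it's waiting in
          ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'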