All Activity

  1. Past hour
  2. It is a data drive. I am running 6.6.7. Attached a screenshot of the drives. Disk 3 is the drive that can't be fixed, and disk 4 is the one they are attempting to clone.
  3. (6.7.0-rc5) I have been investigating an issue I recently started having, but I'm getting nowhere with it. I'll do my best to describe it with examples. (I'm pretty sure I broke something somehow, as it was working previously.) For the sake of showing off the issue, I have two firewall port-forward settings set up for Airsonic: settingA UnraidServer 80 4040, settingB UnraidServer 4040 4040. Basically, that shifts incoming port 80 to 4040 on the Unraid box, and the same with 4040. If I access my machine remotely using www.somewebaddress.com:4040/airsonic/login, it works. www.somewebaddress.com/airsonic/login does not work. However, if I instead change the firewall to point at the Unraid GUI (settingA UnraidServer 80 80), then www.somewebaddress.com shows me my Unraid GUI. So I think I've somehow broken Unraid's ability to pass requests on to containers. (Routing? I'm not sure of the terminology!) This isn't specifically related to Airsonic; it is affecting all of my containers. Any help anyone could give would be gratefully received. tower-diagnostics-20190326-1203.zip
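    A quick check that could narrow this down (a sketch, not confirmed against this system: it assumes console access to the Unraid box and the port numbers from the post): see whether anything is actually listening on 80 and 4040. If nothing listens on 4040, the container mapping is the problem rather than the router rule.

```shell
# 80 should be the Unraid webGUI, 4040 the Airsonic container mapping.
ss -tlnp 2>/dev/null | grep -E ':(80|4040) ' || echo "no listener found (or run this on the Unraid server)"
```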
  4. I use an SB8200 with no issues on a gigabit down / 40Mbps up connection, after my previous Arris router died following 2 years of service. Got it on a really good deal plus gift cards (they're more expensive now than when I bought it a year ago). It is deceptive though: 10Gbps-capable download speeds but only two gigabit ports... and currently Xfinity only allows one in use. As far as latency spikes go, if I saturate my connection I can get some bufferbloat/latency, but on average it does not occur with my household's demands. Additionally, even when downloading files, most remote servers aren't providing the full bandwidth for the files I'm pulling down. I'm running this hardware in conjunction with virtualized pfSense on my server. Average latency is 7-8ms to my ISP gateway and 9-10ms to 8.8.8.8 or 1.1.1.1. Long story short: it works fine for me. fast.com speed test screenshot below, run with 2 other people currently streaming. You could probably get away with a lower model just fine, but when a deal comes along, sometimes you gotta go big.
  5. eric.frederich

    [Support] Linuxserver.io - Letsencrypt (Nginx)

    Yeah. Turns out you don't need to rely on anyone else's infrastructure. If you have any machine on your home network which is accessible via SSH, just do something like this: ssh -L 9000:10.10.1.99:80 home-computer, where 10.10.1.99 is the local IP of your Unraid server and home-computer is something you have set up in ~/.ssh/config to connect to your home machine. Then I can just point my browser at http://localhost:9000 and everything seems to work.
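    For reference, the home-computer alias in that command comes from an ~/.ssh/config entry along these lines (hostname and user here are placeholders, not taken from the post):

```text
# ~/.ssh/config
Host home-computer
    HostName your-home-address.example.com
    User youruser
    Port 22
```

    With that in place, ssh -L 9000:10.10.1.99:80 home-computer listens on local port 9000 and forwards it through the SSH session to port 80 on the Unraid box.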
  6. In the future, you can get more help by posting your diagnostics.
  7. Today
  8. Thanks, will be doing that tomorrow. I got the M1015.zip for my LSI SAS9220-8i card. Sent from my iPhone using Tapatalk Pro
  9. miccos

    [Support] Linuxserver.io - Unifi-Controller

    Hi, just looking for some advice. Sorry, this isn't necessarily Docker related, but it is related to UniFi and unRaid, so I thought someone here might be able to shed some light. I added a USG into my setup and all is working, except that everything within unRaid can no longer resolve websites: 2019-03-26 19:44:21,890::INFO::[rss:309] Failed to retrieve RSS from https://dognzb.cr/rss.cfm?r=e#s#s#ssss#s#s=9000: <urlopen error [Errno -3] Temporary failure in name resolution> Gmail SMTP no longer works for notifications. SAB server test: [Errno 99] Address not available. Check for internet or DNS problems. Thanks
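    For what it's worth, a couple of quick checks from the Unraid console can show where resolution breaks (a generic sketch, nothing here is specific to the USG):

```shell
# Does local name resolution work at all?
getent hosts localhost >/dev/null && echo "local resolution OK" || echo "local resolution failing"
# Which nameserver is the box using? After adding a USG this often points at
# the gateway; if so, try a public resolver under Settings > Network Settings.
cat /etc/resolv.conf
```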
  10. binhex

    [Support] binhex - DelugeVPN

    This looks like a badly configured/corrupt Deluge configuration: it looks like a value for encryption is set to a negative decimal value, and thus it's blowing up. To fix it you can either wade through the config file and try to find the offending value, or simply reset the config by renaming the file /config/core.conf (I think that's the name) to /config/core.conf.old and then restarting the container. This will get you back up and running, but you will of course need to reconfigure completed, incomplete, etc.
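    In shell form, that reset looks roughly like this (the appdata path is a typical Unraid default, not confirmed from the post; adjust CONF_DIR to your container's /config mapping):

```shell
# Move the suspect Deluge config aside; the container regenerates a clean
# core.conf on restart, after which completed/incomplete paths etc. must
# be set up again.
CONF_DIR="${CONF_DIR:-/mnt/user/appdata/binhex-delugevpn}"
if [ -f "$CONF_DIR/core.conf" ]; then
    mv "$CONF_DIR/core.conf" "$CONF_DIR/core.conf.old"
fi
# then restart the container, e.g.: docker restart binhex-delugevpn
```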
  11. wheelhouse20

    Can't add share

    Done. tower-diagnostics-20190326-0206.zip
  12. Unable to remove partition. unRAID: v6.6.6, UD: v2019.03.22. - Enabled Destructive Mode - Unmounted disk - Clicked the '+' icon next to the drive icon - Clicked the red 'x' - Typed 'Yes' - Clicked 'Remove' - I can see the red removing spinner pop up next to the partition for a second or two. The partition is not removed. I can remount it and access all the files. I've tried a few times, and I know I've done this before. edit: Just tried reinstalling UD; same result. edit: After rebooting, the drive is now showing up as unformatted. Clicked Format; after a few seconds the same partition shows up again, but this time it's empty, which is what I wanted, so I guess I'm good to go.
  13. DaLeberkasPepi

    [6.7.0-rc5] extremely high CPU usage in dashboard but not top/htop

    I've read that post before creating mine, but for me it seems like a different issue, because it's not only a graphical bug but a misbehavior that affects the web UI and Docker containers etc. What I found was that after updating a container the CPU usage was normal again, but that container had a max CPU usage of 5%, so it couldn't really be the culprit. I've read that Squid had the same behavior, which he fixed by changing a Docker container from lsio to binhex (I believe Sonarr). The thing is, I don't even have that Sonarr container installed. The weirdest thing is that I can't catch that CPU usage anywhere, but it certainly affects the server performance anyway...
  14. Squid

    Can't add share

    Post your diagnostics. Sent via telekinesis
  15. Hugo

    [Support] Linuxserver.io - Ombi

    I'm having the exact same problem on my machine.
  16. phbigred

    [6.7.0-rc5] extremely high CPU usage in dashboard but not top/htop

    Sounds like a similar issue to this post.
  17. hitman2158

    Building Storage Server + Media Player

    Depends on your purpose. My Plex Media Server is running via Docker, as are other apps like Krusader (file manager via web GUI). Docker and VMs are on my cache SSD. The prices for SATA SSDs are decreasing; here are some examples (incl. VAT) from the cheapest ones in Germany: 120GB for approx. 20 USD, 240GB for 30 USD, 480GB for 57 USD, 1TB for 110 USD, 2TB for 260 USD. So you can get a lot of bang for the buck. And you keep the full speed of your GBit network while copying big data to the storage, for example copying a few movies like you mentioned in one go (100, 200, 300 or more GB). I would spend a few bucks on a cache SSD, especially with a view to expansion in the future. See my sig: 480GB cache SSD on the main rig, 120GB on the backup.
  18. binhex

    [Support] binhex - Jackett

    Yes, it is working fine (I use it). I've never used PIA SOCKS5 so I can't comment on that, but it may well be your issue.
  19. Yes, strange it didn't open at home with 7-Zip; it does at work. Regardless, there's no point in raring the diags. There's a problem identifying the drive connected on this port: Mar 22 17:01:13 Tower kernel: ata13: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Mar 22 17:01:13 Tower kernel: ata13.00: qc timeout (cmd 0xec) Since all 8 Intel SATA ports are free, OP should start by using those.
  20. Melandir

    Case or cage Recommendation

    I'm sorry, I forgot to check availability in the US. On this side of the pond there are still a few left around, and I plan to grab one before they go away. No manufacturer is currently producing any case with lots of 5.25" slots; I was lucky to pick up a used Lian Li PC-A77 off eBay, even if it was not in the best condition. I was inspired by this build and plan to follow it as far as possible.
  21. johnnie.black

    Docker Image corrupt?

    At some time in the past there were errors writing to both cache devices; these are hardware errors: Mar 26 09:36:26 Media kernel: BTRFS info (device sdl1): bdev /dev/sdl1 errs: wr 7, rd 0, flush 0, corrupt 0, gen 0 Mar 26 09:36:26 Media kernel: BTRFS info (device sdl1): bdev /dev/sdk1 errs: wr 9, rd 0, flush 0, corrupt 0, gen 0 See here for more info on what to do: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=700582
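    The counters in those messages come from btrfs itself and can be inspected (and, once the hardware problem is fixed, zeroed) directly; /mnt/cache below is Unraid's usual cache mount point, so adjust if yours differs:

```shell
# Show per-device write/read/flush/corruption/generation error counters.
btrfs device stats /mnt/cache 2>/dev/null || echo "run this on the server, against the btrfs mount"
# After replacing the cable/device, reset the counters so new errors stand out:
# btrfs device stats -z /mnt/cache
```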
  22. wheelhouse20

    Can't add share

    When I click to add a share, the page goes blank. http://192.168.0.13/Shares/Share?name=
  23. andyjayh

    Docker Image corrupt?

    OK, thanks for looking. I've restarted the server and uploaded a new diagnostics file. It's a learning curve, but I now understand that what I was seeing in the Dashboard was my Docker log filling up, and I suspect this is causing part of the issue. I don't know if this could have taken one of the cache drives offline and therefore caused the corruption I am now experiencing? I need to understand why my log file is now filling up so quickly when my server has in the past been up for months at a time without issue. UniFi Controller and Video are the new Dockers, so I suspect one of these is writing large amounts of log entries? media-diagnostics-20190326-0944.zip
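    One way to find the chatty container is to compare the sizes of the per-container JSON log files (the path below is Docker's default layout; a sketch, not verified against this system):

```shell
# List the five largest container log files, biggest last.
LOG_ROOT="${LOG_ROOT:-/var/lib/docker/containers}"
du -h "$LOG_ROOT"/*/*-json.log 2>/dev/null | sort -h | tail -n 5
```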
  24. DrAg0n141

    Call Trace Error Spamming

    Hi, I have seen these call trace errors over the last few days. It is spamming the log every 20 seconds. Does anyone know what this is, or what I can do? My hardware: Intel S2600CP2, E5-2620 V2, 2 x 16GB DDR3 ECC RAM, Mellanox ConnectX-3 SFP+. Mar 26 10:42:43 Sven-NAS kernel: eth0: hw csum failure Mar 26 10:42:43 Sven-NAS kernel: CPU: 2 PID: 0 Comm: swapper/2 Tainted: P O 4.19.24-Unraid #1 Mar 26 10:42:43 Sven-NAS kernel: Hardware name: Intel Corporation S2600CP/S2600CP, BIOS SE5C600.86B.02.06.0007.082420181029 08/24/2018 Mar 26 10:42:43 Sven-NAS kernel: Call Trace: Mar 26 10:42:43 Sven-NAS kernel: <IRQ> Mar 26 10:42:43 Sven-NAS kernel: dump_stack+0x5d/0x79 Mar 26 10:42:43 Sven-NAS kernel: __skb_checksum_complete+0x5d/0xa7 Mar 26 10:42:43 Sven-NAS kernel: igmp_rcv+0x138/0x685 Mar 26 10:42:43 Sven-NAS kernel: ip_local_deliver_finish+0x101/0x1aa Mar 26 10:42:43 Sven-NAS kernel: ip_local_deliver+0xb9/0xd5 Mar 26 10:42:43 Sven-NAS kernel: ? ip_sublist_rcv_finish+0x53/0x53 Mar 26 10:42:43 Sven-NAS kernel: ip_rcv+0x9e/0xbc Mar 26 10:42:43 Sven-NAS kernel: ? ip_rcv_finish_core.isra.0+0x2e6/0x2e6 Mar 26 10:42:43 Sven-NAS kernel: __netif_receive_skb_one_core+0x4d/0x69 Mar 26 10:42:43 Sven-NAS kernel: netif_receive_skb_internal+0x9f/0xba Mar 26 10:42:43 Sven-NAS kernel: napi_gro_frags+0x153/0x18b Mar 26 10:42:43 Sven-NAS kernel: mlx4_en_process_rx_cq+0x7e7/0x950 [mlx4_en] Mar 26 10:42:43 Sven-NAS kernel: ? mlx4_cq_completion+0x1e/0x63 [mlx4_core] Mar 26 10:42:43 Sven-NAS kernel: ? mlx4_en_rx_irq+0x23/0x3e [mlx4_en] Mar 26 10:42:43 Sven-NAS kernel: ? mlx4_eq_int+0xb2a/0xb55 [mlx4_core] Mar 26 10:42:43 Sven-NAS kernel: mlx4_en_poll_rx_cq+0x66/0xc6 [mlx4_en] Mar 26 10:42:43 Sven-NAS kernel: net_rx_action+0x10b/0x274 Mar 26 10:42:43 Sven-NAS kernel: __do_softirq+0xce/0x1e2 Mar 26 10:42:43 Sven-NAS kernel: irq_exit+0x5e/0x9d Mar 26 10:42:43 Sven-NAS kernel: do_IRQ+0xa9/0xc7 Mar 26 10:42:43 Sven-NAS kernel: common_interrupt+0xf/0xf Mar 26 10:42:43 Sven-NAS kernel: </IRQ> Mar 26 10:42:43 Sven-NAS kernel: RIP: 0010:cpuidle_enter_state+0xe8/0x141 Mar 26 10:42:43 Sven-NAS kernel: Code: ff 45 84 ff 74 1d 9c 58 0f 1f 44 00 00 0f ba e0 09 73 09 0f 0b fa 66 0f 1f 44 00 00 31 ff e8 a4 4e be ff fb 66 0f 1f 44 00 00 <48> 2b 1c 24 b8 ff ff ff 7f 48 b9 ff ff ff ff f3 01 00 00 48 39 cb Mar 26 10:42:43 Sven-NAS kernel: RSP: 0018:ffffc9000328bea0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffffdc Mar 26 10:42:43 Sven-NAS kernel: RAX: ffff88881daa0b00 RBX: 000038b4794f3c4a RCX: 000000000000001f Mar 26 10:42:43 Sven-NAS kernel: RDX: 000038b4794f3c4a RSI: 000000003d196a7c RDI: 0000000000000000 Mar 26 10:42:43 Sven-NAS kernel: RBP: ffff88881daab300 R08: 0000000000000002 R09: 00000000000203c0 Mar 26 10:42:43 Sven-NAS kernel: R10: 00000000005ed6e8 R11: 00026820c620f166 R12: 0000000000000004 Mar 26 10:42:43 Sven-NAS kernel: R13: 0000000000000004 R14: ffffffff81e58e58 R15: 0000000000000000 Mar 26 10:42:43 Sven-NAS kernel: do_idle+0x192/0x20e Mar 26 10:42:43 Sven-NAS kernel: cpu_startup_entry+0x6a/0x6c Mar 26 10:42:43 Sven-NAS kernel: start_secondary+0x197/0x1b2 Mar 26 10:42:43 Sven-NAS kernel: secondary_startup_64+0xa4/0xb0
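    "hw csum failure" with an mlx4 receive path in the trace often points at receive checksum offload on the NIC. A possible workaround (not verified for this system) is to let the kernel checksum in software instead; eth0 is the interface name taken from the log:

```shell
# Inspect the current offload settings for the interface.
ethtool -k eth0 2>/dev/null | grep -i checksum || echo "run this on the affected server"
# If rx-checksumming is on, try turning it off (costs a little CPU; the
# setting does not survive a reboot, so script it if it helps):
# ethtool -K eth0 rx off
```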
  25. The .rar is fine with something like 7-Zip. I couldn't see anything obvious apart from the lack of notification in syslog, yet it appears in lspci. My particular card shows this as an example: Mar 23 12:06:16 Mars kernel: ata14.00: ATAPI: MARVELL VIRTUAL, , 1.09, max UDMA/66 Mar 23 12:06:18 Mars kernel: scsi 14:0:0:0: Processor Marvell Console 1.01 PQ: 0 ANSI: 5
  26. johnnie.black

    Docker Image corrupt?

    Syslog rotated, so I can't see the beginning of the problem, but it does look like a cache device dropped offline. Reboot, and after a few minutes of array usage grab and post new diags.
  27. @IamSpartacus Check your other thread for a solution for your wifi issue. 😉