Everything posted by thestraycat

  1. Hi guys, just looking for confirmation. I'm running a 1TB EVO 850 as my cache/docker/VM drive, formatted with BTRFS, and it's running off my Dell H200 HBA on recent IT firmware (as this gives me SATA3 connectivity). I'm running the Dynamix TRIM plugin and getting the following error via email notifications:
     fstrim: /mnt/cache: the discard operation is not supported
     Is this because I'm running BTRFS, or because the drive is connected via the HBA on IT firmware? Lastly, should I be running on a daily or weekly schedule? And is it a big deal if I can't issue the fstrim command to my SSD?
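For what it's worth, LSI SAS2008-based HBAs like the H200 are commonly reported to pass TRIM through only for drives that advertise deterministic read zeros after TRIM, so the HBA is a more likely culprit here than BTRFS. A rough way to check what the drive and kernel actually report (a sketch only; /dev/sdX is a placeholder for the cache device):

    # Does the kernel expose discard for the device? Non-zero DISC-GRAN/DISC-MAX means yes.
    lsblk --discard /dev/sdX
    # What does the drive advertise through the HBA?
    hdparm -I /dev/sdX | grep -i trim
    # Try a one-off manual trim with verbose output.
    fstrim -v /mnt/cache

If fstrim genuinely can't run, it isn't fatal: the SSD's internal garbage collection still works, just less efficiently over time.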
  2. Hmmm, anyone else getting the problem where clicking on "customization" breaks all of the formatting and the container can't recover, even after a reboot? Tried it twice now, blowing away the container, deleting the persistent storage, and reinstalling... WEIRD.
  3. Anyone had any luck running it behind an Apache reverse proxy?
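In case anyone searching later finds it useful, a minimal Apache reverse-proxy vhost for a container like this might look as follows (a sketch only: the hostname, IP and port are placeholders, and mod_proxy plus mod_proxy_http must be enabled):

    <VirtualHost *:80>
        ServerName app.example.com
        ProxyPreserveHost On
        ProxyPass / http://192.168.1.125:8080/
        ProxyPassReverse / http://192.168.1.125:8080/
    </VirtualHost>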
  4. Would love a convenient self-contained MediaWiki container if anyone fancies having a go... I run this particular container on my standalone Docker host without issue, but have no idea how to get it over to Unraid. I think it probably just needs redirecting to install in /appdata: https://hub.docker.com/r/appcontainers/mediawiki/~/dockerfile/
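Untested, but if the image keeps everything under its web root, a first attempt at redirecting its data into /appdata might look like this (the container name, port mapping and /var/www/html path are all assumptions; check the Dockerfile at the link above for the real paths and any required environment variables):

    docker run -d --name mediawiki \
      -p 8080:80 \
      -v /mnt/user/appdata/mediawiki:/var/www/html \
      appcontainers/mediawiki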
  5. Ah right, thanks for the confirmation. Anyone else get it? Or am I on my own with it...
  6. Just noticed this error in the logs when I turn off the container. Is it to be expected?
     Nov 6 17:10:13 MediaServer kernel: transmission-da[26536]: segfault at 48 ip 0000556061c5ee19 sp 00002ae9ef40b9e8 error 4 in transmission-daemon[556061c2e000+72000]
  7. Upgraded to Unraid 6.2.4 and it seems to be running OK at the moment... Still no IPMI, though. Have a funny feeling I'll have to do a full powerdown with the router off to reset the BMC chip properly... UPDATE: Yup. Powering everything off brought it back up.
  8. Ah right... but then why the renaming message "eth0: renamed from veth8ebe344"? Working back through all my changes over the last few days, I noticed the following RED error in the linuxserver.io Transmission log when stopping it... anyone else have it?
     Nov 6 17:10:13 MediaServer kernel: transmission-da[26536]: segfault at 48 ip 0000556061c5ee19 sp 00002ae9ef40b9e8 error 4 in transmission-daemon[556061c2e000+72000]
  9. Very strange. Then I have no idea why it was blocked. I tried it multiple times after reboots: hard-powered off the server, power-cycled the switch and router, connected just the IPMI, then brought the server online. Nothing. Going to try it again in a minute and see if it miraculously works now. Any ideas? Just noticed (while I have the server online and working) that when I start a docker I see this in the Unraid logs:
     Nov 6 16:50:48 MediaServer kernel: device vethb876690 entered promiscuous mode
     Nov 6 16:50:48 MediaServer kernel: eth0: renamed from vethd0005b0
     Nov 6 16:50:48 MediaServer kernel: docker0: port 1(vethb876690) entered forwarding state
     Nov 6 16:50:48 MediaServer kernel: docker0: port 1(vethb876690) entered forwarding state
     Nov 6 16:51:03 MediaServer kernel: docker0: port 1(vethb876690) entered forwarding state
     Nov 6 16:51:07 MediaServer kernel: vethd0005b0: renamed from eth0
     Nov 6 16:51:07 MediaServer kernel: docker0: port 1(vethb876690) entered disabled state
     Nov 6 16:51:07 MediaServer kernel: docker0: port 1(vethb876690) entered disabled state
     Nov 6 16:51:07 MediaServer kernel: device vethb876690 left promiscuous mode
     Nov 6 16:51:07 MediaServer kernel: docker0: port 1(vethb876690) entered disabled state
     Is that normal behaviour??
  10. Yup. But the weird thing was that while the server was playing up, the IPMI was knocked out as well. I'm scared to plug it back in now, tbh, as I removed it while troubleshooting. I'm wondering whether the combination of 2 onboard NICs and 1 IPMI port confuses Unraid at times like version upgrades etc. There's no reason in hell that Unraid should be messing with the IPMI port. It felt like it was confused over MAC addresses and was blocking EVERYTHING to the server while scanning or trying to resolve. If anyone else has this board, I'd love to know your experiences...
  11. Hmm, OK, so I found a VGA cable, went into GUI mode, and noticed that the ethernet adapters had renamed themselves (eth0 and eth5). I refreshed the eth0 config, hoping it would re-save over the file it saves to, and gave eth5 a static IP so it had one in case I needed to hop over to it. Noticed that the MAC addresses for eth0 and eth5 had swapped over in Unraid... switched them back, rebooted, and it's back up for now... Super weird... will keep my eye on it and post in 24hrs. I'd expect an issue in that time.
  12. Tried the 2nd NIC but no joy; it didn't auto-negotiate an IP. No VGA cable to work locally on the box until tomorrow, but it's so strange after everything was working. Were there any big firmware changes in 6.2.3?
  13. Thanks for taking the time to look, John, appreciate that. You're right, it does have 2 NICs. I'm going to try the other NIC now... but as they're both the same NIC, I'm assuming the same result, tbh.
  14. Thanks John. Any ideas what may cause it? I would upload the diagnostics, but the server has just hung and is unresponsive. I'm assuming that when I hard-power it off it will have lost all of its logs... Is there an easy way to forward the log to a remote PC on the network?
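One quick-and-dirty way to get the log off the box before it hangs is to stream it over netcat (IP and port are placeholders; use "nc -l -p 5140" instead if your netcat is the traditional variant):

    # On the remote PC: listen and write everything to a file.
    nc -l 5140 > unraid-syslog.txt
    # On the Unraid box: stream the syslog as it grows.
    tail -f /var/log/syslog | nc 192.168.1.50 5140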
  15. Any chance I can get a screenshot of your container config?
  16. Been monitoring my logs all day and they're full of the following:
     ...active_time=60 secs
     Nov 4 12:03:28 MediaServer ntpd[1832]: 193.150.34.2 local addr 192.168.1.125 -> <null>
     Nov 4 12:03:31 MediaServer kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
     Nov 4 12:03:32 MediaServer ntpd[1832]: Listen normally on 236 eth0 192.168.1.125:123
     Nov 4 12:03:32 MediaServer ntpd[1832]: 193.150.34.2 local addr 172.27.239.1 -> 192.168.1.125
     Nov 4 12:03:32 MediaServer ntpd[1832]: new interface(s) found: waking up resolver
     Nov 4 12:11:51 MediaServer kernel: e1000e 0000:00:19.0 eth0: Reset adapter unexpectedly
     Nov 4 12:11:52 MediaServer ntpd[1832]: Deleting interface #236 eth0, 192.168.1.125#123, interface stats: received=4, sent=7, dropped=0, active_time=500 secs
     Nov 4 12:11:52 MediaServer ntpd[1832]: 193.150.34.2 local addr 192.168.1.125 -> <null>
     Nov 4 12:11:55 MediaServer kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
     Nov 4 12:11:56 MediaServer ntpd[1832]: Listen normally on 237 eth0 192.168.1.125:123
     Nov 4 12:11:56 MediaServer ntpd[1832]: 193.150.34.2 local addr 172.27.239.1 -> 192.168.1.125
     Nov 4 12:11:56 MediaServer ntpd[1832]: new interface(s) found: waking up resolver
     Nov 4 12:28:50 MediaServer kernel: e1000e 0000:00:19.0 eth0: Reset adapter unexpectedly
     Nov 4 12:28:51 MediaServer ntpd[1832]: Deleting interface #237 eth0, 192.168.1.125#123, interface stats: received=9, sent=11, dropped=0, active_time=1015 secs
     Nov 4 12:28:51 MediaServer ntpd[1832]: 193.150.34.2 local addr 192.168.1.125 -> <null>
     Nov 4 12:28:54 MediaServer kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
     Nov 4 12:28:55 MediaServer ntpd[1832]: Listen normally on 238 eth0 192.168.1.125:123
     Nov 4 12:28:55 MediaServer ntpd[1832]: 193.150.34.2 local addr 172.27.239.1 -> 192.168.1.125
     Nov 4 12:28:55 MediaServer ntpd[1832]: new interface(s) found: waking up resolver
     Nov 4 12:29:49 MediaServer kernel: e1000e 0000:00:19.0 eth0: Reset adapter unexpectedly
     Nov 4 12:29:50 MediaServer ntpd[1832]: Deleting interface #238 eth0, 192.168.1.125#123, interface stats: received=0, sent=1, dropped=0, active_time=55 secs
     Nov 4 12:29:50 MediaServer ntpd[1832]: 193.150.34.2 local addr 192.168.1.125 -> <null>
     Nov 4 12:29:53 MediaServer kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
     Nov 4 12:29:54 MediaServer ntpd[1832]: Listen normally on 239 eth0 192.168.1.125:123
     Nov 4 12:29:54 MediaServer ntpd[1832]: 193.150.34.2 local addr 172.27.239.1 -> 192.168.1.125
     Nov 4 12:29:54 MediaServer ntpd[1832]: new interface(s) found: waking up resolver
     Nov 4 12:32:23 MediaServer kernel: e1000e 0000:00:19.0 eth0: Reset adapter unexpectedly
     Nov 4 12:32:24 MediaServer ntpd[1832]: Deleting interface #239 eth0, 192.168.1.125#123, interface stats: received=0, sent=3, dropped=0, active_time=150 secs
     Nov 4 12:32:24 MediaServer ntpd[1832]: 193.150.34.2 local addr 192.168.1.125 -> <null>
     Nov 4 12:32:27 MediaServer kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
     Nov 4 12:32:28 MediaServer ntpd[1832]: Listen normally on 240 eth0 192.168.1.125:123
     Nov 4 12:32:28 MediaServer ntpd[1832]: 193.150.34.2 local addr 172.27.239.1 -> 192.168.1.125
     Nov 4 12:32:28 MediaServer ntpd[1832]: new interface(s) found: waking up resolver
     Am I right in thinking that the NTP daemon is crashing my NICs, which are then resetting themselves and re-registering as a new interface in Unraid every time? I've turned off NTP for now to see if it calms down and stabilises. It started within 24hrs of installing a few dockers, and now the server needs a daily reboot and takes 15-20 mins to come up. I can post logs/diagnostics etc., but thought I'd test the water first in case anyone had seen this behaviour?
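Worth noting the ordering in those logs: the "e1000e ... Reset adapter unexpectedly" line comes first each time, and the ntpd "Deleting interface" / "Listen normally" lines follow, so ntpd looks like a symptom of the adapter reset rather than the cause. A workaround frequently suggested for e1000e resets is to disable the hardware offloads (not a confirmed fix for this board, and eth0 is assumed):

    # Turn off segmentation/receive offloads and see if the resets stop.
    ethtool -K eth0 tso off gso off gro off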
  17. Anyone having a problem making changes to Transmission that actually save, for example adding a blocklist URL or changing values? Even when I edit the settings.json file with the container off and then turn the container on, it still overwrites the settings, which I believe is standard practice for the Transmission daemon... But should I instead be turning off the Transmission service with the container running, editing the settings.json file in the /defaults folder (in the container), and then re-enabling the service? I thought that, with a copy of settings.json kept in the appdata/transmission folder (outside the container), I should have been able to modify it there?
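For reference, transmission-daemon rewrites settings.json from memory on shutdown, so edits to the copy outside the container should stick as long as the whole container (and therefore the daemon) is stopped first. A minimal sketch, assuming a typical container name and appdata path:

    # Stop the container so the daemon can't overwrite the file on exit.
    docker stop transmission
    # Edit the copy that's mapped outside the container.
    nano /mnt/user/appdata/transmission/settings.json
    docker start transmission

If the appdata copy is still being reverted after that, it may be worth checking whether the container re-copies its /defaults template over the file on start.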
  18. I did my tests on a completely different Docker host with no previous install of SABnzbd... Everything's fine until I tick "Download all par2 files" and restart; then I lose the greyed-out multicore tick. Even when I untick "Download all par2 files" and restart, the greyed-out multicore tick doesn't return. I have a sneaking suspicion that amending any of the related PAR options would do the same...
  19. @SparklyBalls Just did the exact same test as yourself, following your vid letter for letter... and got the same result as you. However, when I turn on "Download all par2 files" under Settings -> Switches, I get the issue I reported: I lose the greyed-out tick. Can you see if it's the same at your end? EDIT: Strangely, when I remove the tick from "Download all par2 files", the tick under "Enable Multicore PAR2" doesn't come back either!
  20. Sorry, I mean the parameter in my screenshot. I have par2 and ionice parameters set in SABnzbd, but SABnzbd complains of problems with my par2 parameters (which should be valid if it is indeed using multicore, as these parameters force multicore). I was under the impression that if SAB found a compatible multicore par2 executable, it would have put a tick in the greyed-out multicore box as well? If anyone is running the latest pull of this container, can they check how theirs shows up under Settings -> Switches?
  21. That's great. Do you know if the version of par2 you guys have used in this container can use the switch in my pic? Sent from my Nexus 5 using Tapatalk
  22. Hi guys, love your containers. I see you've mentioned previously in the thread that PAR2 multicore should be enabled as part of this container; however, when I browse the default skin and check my PAR2 settings, there is NO tick in the greyed-out "Enable Multicore PAR2" box... Is this to be expected? And if it is enabled, and this is just PAR2 not being recognized, are there any default PAR2 parameter settings being used straight out of the box? I'm getting a PAR2 switch error using the following:
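A quick way to see which par2 build a container actually ships (the container name sabnzbd is an assumption; running par2 with no arguments should print its version banner and usage, which usually identifies a multicore build):

    docker exec sabnzbd which par2
    docker exec sabnzbd par2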
  23. @johnnie.black - Yeah, I know. I was just wondering when multichannel SMB support would be stable and supported within Unraid, really.
  24. Good to know about the corruption; I take it that should be finalized for the official Unraid 6.2 release? I was debating getting a 24-port switch with 2 SFP+ 10Gb ports for Unraid and my ESXi lab... My thinking was to simplify the network config by just running a 10Gb card in Unraid and in ESXi, and leaving the rest of the switch gigabit. What do you think?
  25. It's more for multiple clients accessing Unraid on different shares and disks, tbh. Any caveats to trying to enable multichannel SMB? Do clients that don't support it fall back to a standard single-channel SMB connection?
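For anyone wanting to experiment: multichannel was still flagged experimental in Samba at the time, but it's a single global option, which on Unraid would go into the extra Samba config (path assumed; test with non-critical data first, given the corruption reports mentioned above):

    # /boot/config/smb-extra.conf
    [global]
        server multi channel support = yes

Multichannel is negotiated per session, so clients that don't support it simply carry on with a normal single-channel SMB connection.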