dexn

Members - 23 posts
Everything posted by dexn

  1. Could you update the container to 2.0.14 please?
  2. I have made a feature request for the disable option discussed in the post above: https://forums.lime-technology.com/index.php?/topic/57075-Disable-dockerupdate.php-check-on-array-start
  3. This script causes the system to wait for a long time if there is no network, for example if the firewall is a VM on UnRAID. It would be nice to have an option to disable this in the docker settings. https://forums.lime-technology.com/index.php?/topic/56918-Booting/Starting-array-hangs-at-dockerupdate-w/o-network-(VM-Firewall)
  4. I figured that was the case, due to it being located in the docker manager plugin. I use the CA Application Auto Update plugin to do updates at a scheduled time. The check seems to be unconditional and runs on every array start. It would be nice to have a way to disable it in the docker manager, for example a checkbox to "Disable update check on first run" for people running their firewall on UnRAID.
  5. They do, but not by UnRAID itself. I use the CA Docker Autostart Manager and I can't see it calling dockerupdate.php.
  6. Please, I need help figuring this out. It is a ridiculously long wait just for the script to time out so the system can continue booting and start the VMs. If I need to reboot or stop/start the array multiple times, this becomes even more frustrating... I have 16 dockers and it takes 8 minutes for the dockerupdate.php script to finish, so it seems each docker takes about 30 seconds to time out (16 x 30 s = 480 s = 8 minutes).
  7. I am running my firewall as a VM with a NIC (internet) and a WiFi card passed through, and my UnRaid is connected to my switch. This means my network is not up until the firewall VM has fully started, which in turn seems to cause dockerupdate.php to hang for anywhere between 7 and 8 minutes. It would be nice if it could check for an internet connection first and fail gracefully if one wasn't available.

     Apr 27 17:28:30 storage emhttp: shcmd (146148): /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/dockerupdate.php |& logger
     Apr 27 17:36:31 storage root: Updating templates... Updating info... Done.

     If anyone knows or has any ideas of how I can resolve this, it would be very much appreciated.
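The "check for an internet connection first, then fail gracefully" idea above could be sketched as a small wrapper around the update call. This is only an illustration, not the stock Unraid script: the 8.8.8.8 probe target, the 120-second ceiling, and the `NET_CHECK_CMD` override are all assumptions for the sketch.

```shell
#!/bin/bash
# Sketch: only run the docker update check once the network is reachable,
# instead of letting dockerupdate.php time out once per container.

# Single reachability probe. 8.8.8.8 is an illustrative target, not an
# Unraid default; CHECK_HOST/NET_CHECK_CMD let a different probe be used.
net_check() {
  ping -c 1 -W 2 "${CHECK_HOST:-8.8.8.8}" >/dev/null 2>&1
}

# Poll the probe until it succeeds or the deadline (first argument, in
# seconds, default 120) passes.
wait_for_network() {
  deadline=$(( $(date +%s) + ${1:-120} ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    "${NET_CHECK_CMD:-net_check}" && return 0   # network is up
    sleep 2
  done
  return 1                                      # gave up; caller can skip the check
}

# Intended use at array start (illustrative only):
#   if wait_for_network 120; then
#     /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/dockerupdate.php
#   else
#     logger "dockerupdate skipped: no network after 120s"
#   fi
```

With a guard like this the worst case is one bounded wait, rather than a per-container timeout that multiplies with the number of dockers (16 x 30 s in the post above).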
  8. Hi binhex, is there any way you could add support for a licensekey.dat file in the TeamSpeak docker, please?
  9. I can say that I am not seeing this problem... Uptime is over 3 days (it was longer, but I had to remove a RAM stick due to a failure), using a Marvell 8EE8001 GigaLAN...
  10. I am running the web server for ownCloud and need to configure lighttpd's mod_webdav & mod_dav_svn for the sync clients to work properly... Has anybody been able to do this successfully?
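For the mod_webdav half of the question above, enabling WebDAV in lighttpd generally amounts to loading the module and activating it for the relevant URL. A minimal sketch, where the `/owncloud` path is an assumption and ownCloud's own recommended setup may differ:

```
# /etc/lighttpd/lighttpd.conf fragment (illustrative)
server.modules += ( "mod_webdav" )

$HTTP["url"] =~ "^/owncloud($|/)" {
    webdav.activate = "enable"
}
```

After a config change, lighttpd needs a restart for the module to load.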
  11. I love your work, and thank you so much for your time and effort. I am wondering if there is a way to easily add and change the proxy settings, and in a future release would you consider adding those options in unmenu? Quoted from the Transmission guide:

      proxy-authentication: String
      proxy-authentication-required: Boolean (default = 0)
      proxy-port: Number (default = 80)
      proxy-server: String
      proxy-server-enabled: Boolean (default = 0)
      proxy-type: Number (0 = HTTP, 1 = SOCKS4, 2 = SOCKS5, default = 0)
      proxy-username: String
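In Transmission those keys live in settings.json, which has to be edited while the daemon is stopped (otherwise it overwrites the file on exit). A sketch of what a SOCKS5 proxy might look like, assuming the option names quoted above; the host, port, and credentials are placeholder values:

```json
{
  "proxy-server-enabled": 1,
  "proxy-server": "proxy.example.com",
  "proxy-port": 1080,
  "proxy-type": 2,
  "proxy-authentication-required": 1,
  "proxy-username": "user",
  "proxy-authentication": "secret"
}
```

Here `"proxy-type": 2` selects SOCKS5 per the quoted guide (0 = HTTP, 1 = SOCKS4, 2 = SOCKS5).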
  12. I ran memtest for more than 48 hours with no errors. I rebooted the system and the "bug" didn't reappear, so I'm not sure whether it will arise further down the track or whether it was just playing up that day... I have now also upgraded to 5.0-rc6-r8168-test2, which seems to be running smoothly so far, using an M1015 flashed to LSI...
  13. Another crash, another log: http://dcwebsupport.com/syslog2.txt. I noticed similarities between the crashes:

      Jul 24 11:38:22 Storage_Server kernel: shfs[5593]: segfault at ffffffff ip ffffffff sp b3fff2d8 error 14 (from the first log)
      Jul 24 18:06:18 Storage_Server kernel: shfs[4295]: segfault at 345ad3f4 ip b74c1251 sp b63d6f50 error 4 in libc-2.11.1.so[b744d000+15c000] (from the second log)

      The crash seems to happen right after these... need help!!!
  14. Got it to crash "better" than before. I can't get any response from the system at all, not even from the machine's keyboard... I typed this by hand, so please excuse me if there is a typo.

      [<c102d0cf>] ? irq_enter+0x3c/0x3c
      <IRQ>
      [<c102cf8d>] ? irq_exit+0x32/0x53
      [<c1015d7e>] ? smp_apic_timer_interrupt+0x6c/0x7a
      [<c130f902>] ? apic_timer_interrupt+0x2a/0x30
      [<c108007b>] ? wait_on_retry_sync_kiocb+0xe/0x41
      [<c108fe8e>] ? d_alloc+0x74/0x14a
      [<c10879db>] ? d_alloc_and_lookup+0x1f/0x4f
      [<c1087eee>] ? do_lookup+0x19e/0x262
      [<c10883a3>] ? link_path_walk+0x1d8/0x5e4
      [<c1088b15>] ? path_lookupat+0x4c/0x4ba
      [<c1088f72>] ? path_lookupat+0x4a9/0x4ba
      [<c1089e39>] ? getname_flags+0x21/0xbe
      [<c1088f9f>] ? do_path_lookup+0x1c/0x4e
      [<c1089f14>] ? user_path_at_empty+0x3e/0x69
      [<c10894fc>] ? user_path_at+0xd/0xf
      [<c1083739>] ? vfs_fstatat+0x51/0x78
      [<c10837a4>] ? vfs_lstat+0x16/0x18
      [<c10837ba>] ? sys_lstat64+0x14/0x28
      [<c10815e0>] ? __fput+0x186/0x68f
      [<c10815fc>] ? fput+0x13/0x15
      [<c107ece1>] ? filp_close+0x57/0x61
      [<c107ed45>] ? sys_close+0x5a/0x88
      [<c130f525>] ? syscall_call+0x7/0xb
      [<c1300000>] ? _cpu_down+0xc4/0x1bc

      It seems to happen when I am copying a lot of files, but I have no confirmation to back this up; it is just a pattern I am noticing.
  15. I'm going to restart the server and see if I can get it to reinvoke the error with a shorter syslog (no parity check). When I get it to crash again, I will leave it crashed in case someone needs me to do something while it is in that state.
  16. Full syslog: http://dcwebsupport.com/syslog.txt
  17. Running 5.0-rc5... Yes, I have tried running without plugins and still have the same issue... I can get the full syslog.txt if you think it will help better than the extract... I have noticed that the disk shares are still available, while the user shares are not...
  18. I keep getting this error, and all shares become unavailable from the network. This has only happened while copying large amounts of files to a share over the network.

      Jul 24 11:36:31 Storage_Server kernel: scsi_verify_blk_ioctl: 36 callbacks suppressed
      Jul 24 11:36:31 Storage_Server kernel: hdparm: sending ioctl 2285 to a partition!
      Jul 24 11:36:32 Storage_Server last message repeated 5 times
      Jul 24 11:36:32 Storage_Server kernel: smartctl: sending ioctl 2285 to a partition!
      Jul 24 11:36:32 Storage_Server last message repeated 3 times
      Jul 24 11:37:33 Storage_Server kernel: scsi_verify_blk_ioctl: 36 callbacks suppressed
      Jul 24 11:37:33 Storage_Server kernel: hdparm: sending ioctl 2285 to a partition!
      Jul 24 11:37:34 Storage_Server last message repeated 5 times
      Jul 24 11:37:34 Storage_Server kernel: smartctl: sending ioctl 2285 to a partition!
      Jul 24 11:37:34 Storage_Server last message repeated 3 times
      Jul 24 11:38:22 Storage_Server kernel: shfs[5593]: segfault at ffffffff ip ffffffff sp b3fff2d8 error 14
      Jul 24 11:38:22 Storage_Server kernel: BUG: Bad page map in process shfs pte:0a000000 pmd:5488a067
      Jul 24 11:38:22 Storage_Server kernel: addr:b3fd0000 vm_flags:00100077 anon_vma:f3aca190 mapping: (null) index:b3fd0
      Jul 24 11:38:22 Storage_Server kernel: Pid: 7380, comm: shfs Not tainted 3.0.35-unRAID #2
      Jul 24 11:38:22 Storage_Server kernel: Call Trace:
      Jul 24 11:38:22 Storage_Server kernel: [] print_bad_pte+0x147/0x159
      Jul 24 11:38:22 Storage_Server kernel: [] zap_pte_range+0x231/0x319
      Jul 24 11:38:22 Storage_Server kernel: [] unmap_page_range+0x14d/0x154
      Jul 24 11:38:22 Storage_Server kernel: [] unmap_vmas+0x65/0x86
      Jul 24 11:38:22 Storage_Server kernel: [] exit_mmap+0x65/0xbd
      Jul 24 11:38:22 Storage_Server kernel: [] mmput+0x1f/0x8f
      Jul 24 11:38:22 Storage_Server kernel: [] exit_mm+0xf9/0x101
      Jul 24 11:38:22 Storage_Server kernel: [] do_exit+0x1db/0x274
      Jul 24 11:38:22 Storage_Server kernel: [] ? dequeue_signal+0xa1/0x115
      Jul 24 11:38:22 Storage_Server kernel: [] do_group_exit+0x65/0x8e
      Jul 24 11:38:22 Storage_Server kernel: [] get_signal_to_deliver+0x29b/0x2ae
      Jul 24 11:38:22 Storage_Server kernel: [] do_signal+0x5a/0xeb
      Jul 24 11:38:22 Storage_Server kernel: [] ? vfs_read+0x88/0xfa
      Jul 24 11:38:22 Storage_Server kernel: [] ? do_sync_write+0xc5/0xc5
      Jul 24 11:38:22 Storage_Server kernel: [] ? vfs_writev+0x36/0x44
      Jul 24 11:38:22 Storage_Server kernel: [] do_notify_resume+0x23/0x44
      Jul 24 11:38:22 Storage_Server kernel: [] work_notifysig+0x13/0x19
      Jul 24 11:38:22 Storage_Server kernel: [] ? _cpu_down+0xc4/0x1bc
      Jul 24 11:38:22 Storage_Server kernel: Disabling lock debugging due to kernel taint
      Jul 24 11:38:22 Storage_Server kernel: BUG: Bad page map in process shfs pte:22000000 pmd:5488a067
      Jul 24 11:38:22 Storage_Server kernel: addr:b3fd1000 vm_flags:00100077 anon_vma:f3aca190 mapping: (null) index:b3fd1
      Jul 24 11:38:22 Storage_Server kernel: Pid: 7380, comm: shfs Tainted: G B 3.0.35-unRAID #2
      Jul 24 11:38:22 Storage_Server kernel: Call Trace:
      Jul 24 11:38:22 Storage_Server kernel: [] print_bad_pte+0x147/0x159
      Jul 24 11:38:22 Storage_Server kernel: [] zap_pte_range+0x231/0x319
      Jul 24 11:38:22 Storage_Server kernel: [] unmap_page_range+0x14d/0x154
      Jul 24 11:38:22 Storage_Server kernel: [] unmap_vmas+0x65/0x86
      Jul 24 11:38:22 Storage_Server kernel: [] exit_mmap+0x65/0xbd
      Jul 24 11:38:22 Storage_Server kernel: [] mmput+0x1f/0x8f
      Jul 24 11:38:22 Storage_Server kernel: [] exit_mm+0xf9/0x101
      Jul 24 11:38:22 Storage_Server kernel: [] do_exit+0x1db/0x274
      Jul 24 11:38:22 Storage_Server kernel: [] ? dequeue_signal+0xa1/0x115
      Jul 24 11:38:22 Storage_Server kernel: [] do_group_exit+0x65/0x8e
      Jul 24 11:38:22 Storage_Server kernel: [] get_signal_to_deliver+0x29b/0x2ae
      Jul 24 11:38:22 Storage_Server kernel: [] do_signal+0x5a/0xeb
      Jul 24 11:38:22 Storage_Server kernel: [] ? vfs_read+0x88/0xfa
      Jul 24 11:38:22 Storage_Server kernel: [] ? do_sync_write+0xc5/0xc5
      Jul 24 11:38:22 Storage_Server kernel: [] ? vfs_writev+0x36/0x44
      Jul 24 11:38:22 Storage_Server kernel: [] do_notify_resume+0x23/0x44
      Jul 24 11:38:22 Storage_Server kernel: [] work_notifysig+0x13/0x19
      Jul 24 11:38:22 Storage_Server kernel: [] ? _cpu_down+0xc4/0x1bc
  19. The Intel S1200BTS mainboard has one Gigabit Ethernet device (82574L) connected to PCI-E x1 on the PCH, and one Gigabit Ethernet PHY (82579) connected to the PCH through a PCI-E x1 interface, both of which seem to be supported. The board also supports one Intel® Xeon® E3-1200 series or Intel® Core™ i3-2100 series processor in an FC-LGA 1155 socket.

      I have looked into the power supplies and followed your suggestion. I will be choosing a Seasonic X-850 to power this beauty. By my calculations, that gives about 100W of headroom over the start-up draw when using 22 green drives and 2 black drives. The only reason I considered the second M1015 is purely for performance.

      PS - can't wait for them to fix the LSI card issue with RC6beta.

      So far this is what I am looking at:
      UnRAID - 5.0rc5
      Case - Norco RPC-4224
      PSU - Seasonic X-850
      Mobo - Intel S1200BTS (considering the Supermicro X9SCM-F pending more info)
      CPU - Intel Core i3 2130 3.40GHz
      RAM - 8GB
      SAS cards - IBM M1015 flashed to LSI9211-8i in IT mode (to control the 2 backplanes holding the parity and cache drives, to gain as much speed as possible); 2nd IBM M1015 flashed to LSI9211-8i in IT mode, dual-linked to an Intel RES2SV240 (the dual link is to help with speed; it will control the remaining 4 backplanes)
      HDD - Various 3TB
  20. I am just after some feedback on the system I intend to build, e.g. anything that may not work or may bug out from time to time:

      UnRAID - 5.0rc5
      Case - Norco RPC-4224
      PSU - Corsair 800W GS Series GS800 (single 12V rail)
      Mobo - Intel S1200BTS
      CPU - Intel Core i3 2130 3.40GHz
      RAM - 8GB
      SAS cards - IBM M1015 flashed to LSI9211-8i in IT mode (to control the 2 backplanes holding the parity and cache drives, to gain as much speed as possible); 2nd IBM M1015 flashed to LSI9211-8i in IT mode, dual-linked to an Intel RES2SV240 (the dual link is to help with speed; it will control the remaining 4 backplanes)
      HDD - Various 3TB

      I will be using the 3 extra HDDs outside of the array...