AndreM

Everything posted by AndreM

  1. I tested it by just updating my old Docker container: I only edited the Repository field to registry.gitlab.com/bockiii/deemix-docker and clicked Apply. It updated and is running now. It took a couple of stops and starts, though, because notabug.org, where deemix-webui is hosted, is currently under a DDoS attack. If you get this message in the logs, it's most likely because of the DDoS attack:

     [cont-init.d] First start, cloning repo
     Cloning into 'deemix'...
     fatal: unable to access 'https://notabug.org/RemixDev/deemix-pyweb.git/': The requested URL returned error: 504

     When it starts up correctly, it's normal to see this in the log (at least it is for me; it runs and I can access it):

     Starting server at http://0.0.0.0:9666
     * Serving Flask app "server" (lazy loading)
     * Environment: production
     WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
     * Debug mode: off
  2. The CN_DNSBL (DNS blacklist) entry looks like it might be coming from a firewall blocker or a DNS blackhole filter. Are you perhaps running such software, or connecting through a network or ISP that is?
  3. I believe this issue resolved itself. I waited a while longer and noticed these entries in the log file:

     md: sync done. time=62278sec
     md: recovery thread: completion status: 0

     The webUI is still showing only 35% completion, making me think the webUI broke somewhere and stopped updating. I'll confirm that when I can restart the server. PS: As suggested, I created a post in the feature request forum section regarding the diagnostic output.
  4. I would like to request the option to choose what goes into the generated Diagnostics, or a way to encrypt certain portions of it and share those only with specific people. I recently had a support issue and was asked to provide the support diagnostics. After generating it and going through the output, I was reluctant to share it in its entirety publicly. Likewise, other users were reluctant to help me unless I supplied the whole diagnostic output. Specifically, the output that concerned me was:

     - The mover log (when enabled) writes to the syslog, and the syslog is included, containing the full paths of files that are moved. Perhaps the mover log could be written to another file and included selectively? I realise you can disable mover logging, but by the time you need to generate the diagnostic output it's probably too late to disable it.
     - The lsof output reveals a lot about your local network, such as which IP addresses and ports are listening for services, and which connections are active. Even though the unRAID server itself might not be accessible from the internet, some of the VMs might be, and this was not information I was willing to share, as it reveals a lot about the local network, including remote connection IP addresses. I understand that networking information can be useful for troubleshooting certain types of problems, but that's not the case for all problems.
     - The process list contains the full command lines for all processes, also revealing a lot about what you're running on the system, including processes inside Docker containers. This also includes ports that could be public facing (for example, a BitTorrent port specified on the command line, which you also have open on the internet).
     - The vars output includes the serial numbers of drives, making them traceable.
     - The config/disk output contained usernames in the permissions (this was masked in the shares output).
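In the meantime, one possible stopgap (not an unRAID feature, just a hypothetical workaround you could script yourself) is to redact the obvious identifiers from an extracted diagnostics folder before posting it. A minimal sketch, assuming plain-text files and that masking IPv4 addresses covers your case (demonstrated here on a temp file standing in for a syslog):

```shell
# Hypothetical redaction pass: mask IPv4 addresses in a diagnostics file
# before sharing it. A temp file stands in for the real syslog here.
f=$(mktemp)
printf 'rpc.mountd: mount request from 192.168.2.61:684\n' > "$f"

# Replace every dotted-quad with a placeholder (crude, but shows the idea;
# paths, serials, and usernames would need their own patterns).
out=$(sed -E 's/([0-9]{1,3}\.){3}[0-9]{1,3}/REDACTED-IP/g' "$f")

echo "$out"
rm -f "$f"
```

A regex like this will also hit version strings that look like dotted quads, so the output still needs a manual once-over before sharing.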
  5. Of course, and the server is not accessible directly, but some VMs are accessible, and the diagnostic information does reveal a lot about my internal network if someone gained access some other way. I wasn't quite expecting to be discussing the support diagnostics in this post though! :-)
  6. Thank you. I chose the anonymized option, but I still feel there's unneeded personal information. A few examples I spotted while going through the files just now: log entries for the cache mover naming the full paths of files it's moving, lsof output showing port numbers and IP addresses, and the process listing showing full process command lines. I'll log this as a feature request, but it's obviously not going to help me right now :-)
  7. Thanks for replying, Johnnie. Is there a specific file you want to see? The diagnostic output includes much more information than I feel comfortable sharing publicly (such as details of VMs, port mappings, IP addresses, Docker processes and so on).
  8. Hi there, I'm using unRaid 6.4.1 with a Pro license, 1 parity drive, 10 data drives and a cache drive. Recently I replaced a 4TB parity drive with an 8TB one, which worked perfectly. The procedure I used was to remove the old drive, plug in the new one, choose it as the new parity and let parity rebuild. I wanted to use the old parity drive to replace an old 1TB data disk, so after the parity rebuild completed I shut down the array again, removed the 1TB data drive and plugged in the old 4TB. I did not run preclear on it. When unRaid started up it said the 1TB was missing; I chose the now-unallocated 4TB and it started the array, marked the drive as emulated and began a data rebuild with an estimated time of about 24 hours. The problem is, it's been running for 17 hours now, and for at least the last 4 hours the percentage complete has not changed. It looks like it's been stuck at these values for the last 4 hours or more:

     Total size: 4TB
     Elapsed time: 17 hours (this one is increasing)
     Current position: 1.42 TB (35.4%)
     Estimated speed: 59.8 MB/sec
     Estimated finish: 12 hours

     The 'Writes' column in the Main device list is also no longer increasing, staying at 2,983,484, with reads at 45. I have activity LEDs on my hot-swap bay and can see the drive's light staying on, even though none of these values are increasing. The log file is showing a WRITE DMA failure followed by a hard reset of the SATA link for one of the controllers; I'm not sure how to tell which drive that relates to. So now I'm not sure what I should do. Should I let it run for a few more hours, or should I try to restart the array?
  9. That could also explain why mine are working. I'm using a static IP configuration on unRaid.
  10. I'm also using pfSense (on dedicated, bare-metal) as my default gateway and DNS server, and my dockers are all working. I'm even using a few of Sparkly's dockers, and I've not run into DNS issues with any of them. They are all set to auto-start and my server restarts at least twice a week due to constant power cuts in my country.
  11. Yup, I see that the cursor and Delete keys indeed aren't working. I pinged hurricane about it, since he develops and maintains the dockergui base. Great, thanks aptalca.
  12. Hi aptalca. Your Calibre-RDP container is great; I just finished moving my whole library onto it. I did have a bit of trouble in the beginning because I didn't read the documentation properly. I assumed that /config was for config only, so I configured the mappings as such:

     - /config mapped to /mnt/cache/appdata/calibre/config
     - /books mapped to /mnt/user/Books/Calibre (a new, empty directory)
     - /downloads mapped to /mnt/user/Downloads (a place where I can put books to import from)

     When the wizard popped up and asked me where to create the library, I chose /books and then started some imports to build the library. This worked great until I tried to restart Calibre or use the server, because (as you rightly mentioned) it defaults to expecting the library in /config. Anyway, I fixed it all up and now have /config mapped to /mnt/user/Books/Calibre, which works great. My question is: is it normal for my cursor keys and Delete key to be non-functional in the web RDP session? When editing book titles and authors I have to use the mouse to position my cursor in text input fields, and I have to use Backspace instead of Delete to delete text.
  13. Thanks, Jon. Is there more information about what the DNS issue is? I don't seem to have any issues with my Docker containers (currently on RC3), so I'm just curious what the issue is and whether it might show up for me when I do upgrade to RC4.
  14. I'm using TurboVNC (a fork of TigerVNC, which is a fork of TightVNC, which is a fork of RealVNC!) on Windows 8, and it works without any issues. As mentioned by itimpi, check your port: I connect on port 5900, not 5700. The VNC server I specify in TurboVNC for my first VM is "tower:5900", for the second one "tower:5901", etc. You should be able to see the VNC ports listed in the VMs tab of the unRAID WebGUI.
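The numbering follows the standard VNC convention: display N listens on TCP port 5900 + N. A quick sketch of the mapping (the host name "tower" is just the usual unRAID default from the post above):

```shell
# Standard VNC convention: display N listens on TCP port 5900+N,
# so the first VM is tower:5900, the second tower:5901, and so on.
host=tower
ports=$(for display in 0 1 2; do
  echo "$host:$((5900 + display))"
done)
echo "$ports"
```

So a client pointed at port 5700 would find nothing listening, which matches the symptom described.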
  15. I would also like to see Pushbullet support
  16. Hahah, no worries, you're right, we all have that one in common too, and it could be any docker using Python 2.7. From your logs, do you know when you upgraded to RC3, to see if it only started happening after that?
  17. I saw a similar error in my syslog today and also assumed it was Docker related. I'll see if I can find the entry. EDIT: Found it...

     May 25 17:17:07 unRAID kernel: python[29037]: segfault at 58 ip 000000000052c8d8 sp 00002b3de8c040e0 error 4 in python2.7[400000+2bd000]

     Here is my list of containers... maybe we have one in common:

     binhex-delugevpn    binhex/arch-delugevpn:latest
     CouchPotato         hurricane/docker-couchpotato:latest
     KODI-Headless       sparklyballs/headless-kodi-helix:latest
     MariaDB             needo/mariadb:latest
     MediaBrowser        mediabrowser/mbserver:latest
     nzbgetvpn           jshridha/docker-nzbgetvpn:latest
     Sonarr              hurricane/docker-nzbdrone:latest

     John I have:

     binhex/arch-madsonic
     needo/couchpotato
     needo/deluge
     sparklyballs/headless-kodi-helix
     needo/mariadb
     needo/nzbdrone
     gfjardim/nzbget
     gfjardim/pyload

     Looks like headless-kodi is common between the three of us?
  18. I also completed an upgrade to RC3 recently, and I was so impressed with how easy it was to set up VMs that I shut down my VMware ESX server and transplanted the MB/CPU/RAM from that server into my unRaid server. I now have unRaid running on a quad-core 3.4GHz CPU with 32GB RAM, and I'm slowly migrating all the VMs I had running on ESX to it (well, rebuilding them and restoring data backups). It's really nice to be able to mount unRaid shares directly inside a VM without having to bother with SMB/NFS, and I now have one less machine contributing to my power bill! The only issue I've picked up so far is this:

     May 25 22:55:45 pooh kernel: python[21802]: segfault at 58 ip 000000000052f1cb sp 00002aedb4a02ac0 error 4 in python2.7[400000+2bd000]
     ...
     May 26 00:04:05 pooh kernel: python[30095]: segfault at 58 ip 000000000052c8d8 sp 00002b4a97574140 error 4 in python2.7[400000+2bd000]
     May 26 01:03:44 pooh kernel: python[29942]: segfault at 58 ip 000000000052c8d8 sp 00002acb5e8926e0 error 4 in python2.7[400000+2bd000]
     May 26 01:08:37 pooh kernel: python[9038]: segfault at 58 ip 000000000052c8d8 sp 00002b87bacfb220 error 4 in python2.7[400000+2bd000]

     I presume that's caused by a docker, because Python has been removed from unRaid? Any idea what might be causing this, or where I can look to see which docker is causing it? The syslog has a PID, but by the time it's logged that PID is gone. I also don't know if this is RC3 related or because of the new MB/CPU.
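For anyone hitting the same thing: if you catch the PID while the process is still alive, one way to tell whether it belongs to a container is to look at its cgroup, since container processes normally carry a docker path there while host processes don't. A sketch under that assumption (it checks the shell's own PID as a stand-in, because the segfaulting PIDs from the log are long gone):

```shell
# Check whether a PID belongs to a Docker container by inspecting its cgroup.
# pid=$$ is a stand-in; on a live hit, substitute the PID from the syslog line.
pid=$$
if [ -r "/proc/$pid/cgroup" ] && grep -q 'docker' "/proc/$pid/cgroup"; then
  # Container processes show a docker/<container-id> (or docker-<id>.scope) path
  msg="PID $pid is inside a docker container"
else
  msg="PID $pid is a host process, or it has already exited"
fi
echo "$msg"
```

Of course this only helps if the process lingers; for a crash-and-vanish segfault you'd still need to correlate the timestamps against each container's own log.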
  19. Quoting my earlier question:

     "What is the location of the config file? I would like to make a backup of just appdata and my container configuration (the volume mappings, port mappings and other variables). I know where the appdata is, but where is the container configuration stored?"

     And the reply:

     "Not entirely sure what is meant by config file, since it is typically in a mapped volume, probably part of appdata for most dockers. Perhaps he meant the template settings. Those are in /boot/config/plugins/dockerMan/templates-user"

     Great, thank you. The contents of the templates-user directory were exactly what I was looking for!
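For the backup itself, a minimal sketch that tars appdata together with the dockerMan user templates. The paths are assumptions (typical unRAID defaults, including the templates-user location mentioned above) and the destination directory is hypothetical; the script simply skips anything it can't find:

```shell
# Back up appdata plus the dockerMan user templates into one archive.
# All paths below are assumptions (common unRAID defaults); adjust as needed.
appdata="/mnt/cache/appdata"
templates="/boot/config/plugins/dockerMan/templates-user"
backup="/mnt/user/backups/docker-config-$(date +%Y%m%d).tar.gz"

# Collect only the directories that actually exist on this system
to_save=""
for d in "$appdata" "$templates"; do
  [ -d "$d" ] && to_save="$to_save $d"
done

if [ -n "$to_save" ]; then
  # $to_save is deliberately unquoted so each path becomes its own argument
  mkdir -p "$(dirname "$backup")" && tar czf "$backup" $to_save
  echo "wrote $backup"
else
  echo "nothing to back up (paths not found on this system)"
fi
```

Restoring the templates-user directory onto a fresh flash drive should bring back the container definitions, while appdata carries the per-container state.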
  20. What is the location of the config file? I would like to make a backup of just appdata and my container configuration (The volume mappings, port mappings and other variables). I know where the appdata is, but where is the container configuration stored?
  21. Sorry for digging up this old topic, but I just ran into the same problem. I'm using unRaid 6.0-beta3 (I really need to upgrade!). Mounting a user share via NFS on Ubuntu, I get random directories going 'missing'; for example, this is on my Ubuntu system:

     user@gopher:/pooh/TV$ ls -l
     ls: cannot access Directory2: No such file or directory
     ls: cannot access Directory5: No such file or directory
     ls: cannot access Directory6: No such file or directory
     ls: cannot access Directory7: No such file or directory
     drwxrwxr-x 1 99 users 440 Jan 12  2013 Directory1
     d??? ? ? ? ? ? Directory2
     drwxrwxrwx 1 99 users 248 Aug  5 01:17 Directory3
     drwxrwx--- 1 99 users 952 Sep 27 02:43 Directory4
     d??? ? ? ? ? ? Directory5
     d??? ? ? ? ? ? Directory6
     d??? ? ? ? ? ? Directory7

     It is mounted as:

     pooh:/mnt/user/TV on /pooh/TV type nfs (rw,addr=192.168.2.61)

     Exported on unRaid as:

     "/mnt/user/TV" -async,no_subtree_check,fsid=100 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)

     Syslogs on both machines show no errors. The funny thing is, if I telnet to the unRaid server and then cd into each missing directory, they reappear on the NFS client side. Any ideas on how I can fix this? I've been using CIFS up to now, but due to an issue with CIFS I'm trying out NFS. Syslog, for reference:

     Sep 29 12:23:10 pooh emhttp: shcmd (219): /usr/local/sbin/emhttp_event svcs_restarted
     Sep 29 12:23:11 pooh emhttp_event: svcs_restarted
     Sep 29 12:23:11 pooh avahi-daemon[1307]: Service "pooh" (/etc/avahi/services/smb.service) successfully established.
     Sep 29 12:23:37 pooh rpc.mountd[2502]: authenticated mount request from x:684 for /mnt/user/TV (/mnt/user/TV)
     Sep 29 12:23:50 pooh rpc.mountd[2502]: authenticated unmount request from x:864 for /mnt/user/TV (/mnt/user/TV)
     Sep 29 12:24:08 pooh rpc.mountd[2502]: authenticated mount request from x:843 for /mnt/user/TV (/mnt/user/TV)
     Sep 29 12:35:50 pooh kernel: mdcmd (4169): spindown 5
     Sep 29 12:55:50 pooh kernel: mdcmd (4170): spindown 0
     Sep 29 14:14:40 pooh in.telnetd[22956]: connect from x (x)
     Sep 29 14:14:41 pooh login[22957]: ROOT LOGIN on '/dev/pts/1' from 'x'
  22. Hi there, apologies for only replying now. I've done the memory checks on both systems and didn't find any errors. I've also run fsck on the disks, which found no errors, and a full parity check, which also found no errors. Hardware of the unRaid server:

     ASUS P5B motherboard
     Intel Core 2 6600 2.4GHz
     3GB RAM
     Onboard Realtek RTL8111B gigabit Ethernet
     Onboard Intel P965/ICH8 southbridge (4 SATA ports)
     Onboard JMicron JMB363 (1 SATA port)
     PCI Express Adaptec 1430SA (2 SATA ports)
     PCI Express Adaptec 1430SA (2 SATA ports)

     That's 9 ports in total; I have four 2TB disks and five 1TB disks connected. The Ubuntu server I was copying from is a VM running on my VMware ESX server along with many other VMs. (The unRaid server is not virtualised; it runs on dedicated hardware.) However, since I posted this the problem has not recurred and I've been unable to reproduce it, so I'm marking this issue as solved for now.
  23. Hi there. I'm not sure if this is related to RC5 or some other issue, but either way I'd like to try to resolve it. I'm running unRaid version 5.0-rc5 with 9 disks, a mix of 2TB and 1TB disks. Currently there is 2.9TB of free disk space in total, with 100GB free on the disk with the least free space. I have an Ubuntu 10.04.4 server that I use as a staging area to copy videos from, and for months I've been using it like this and never noticed anything wrong until a few days ago. One video file I copied from this Ubuntu server (using samba mounts) to my unRaid server wouldn't play from the unRaid server. I ran an md5 hash on the Ubuntu server's copy of the file, then telnetted to the unRaid server and ran an md5 hash there, and I got a different hash, so somewhere the file got corrupted. When I try to run the md5 hash on the Ubuntu server against the samba share, I get an Input/output error. I'm using a user share here with a min free space of 3000000, and the file I'm copying is 670MB. I should also add that I'm having difficulty reproducing the problem consistently. I deleted the file and copied it again, and it was still corrupt. Then I deleted it again, but this time copied it from Windows, and the file was not corrupt. Then I deleted it again, copied it from Ubuntu, and the file was not corrupt. Does anyone have suggestions on where I can look next? The unRaid server's log files look clean; there's nothing in the syslog or samba log files. EDIT: I marked this as solved because I cannot reproduce the problem.
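For what it's worth, the check I used generalises into a small script: hash the source and the copy, then compare. A self-contained sketch of the idea (it demonstrates on temp files; in practice $src would be the file on the Ubuntu side and $dst the copy on the unRaid samba share):

```shell
# Verify a copy by comparing md5 checksums of source and destination.
# Temp files stand in for the real staging file and its copy on the share.
src=$(mktemp); dst=$(mktemp)
head -c 65536 /dev/urandom > "$src"   # fake "video file" payload
cp "$src" "$dst"                      # the copy step being verified

sum_src=$(md5sum "$src" | cut -d' ' -f1)
sum_dst=$(md5sum "$dst" | cut -d' ' -f1)

if [ "$sum_src" = "$sum_dst" ]; then
  result="OK: checksums match ($sum_src)"
else
  result="CORRUPT: $sum_src != $sum_dst"
fi
echo "$result"
rm -f "$src" "$dst"
```

Running this after every large copy would at least catch silent corruption immediately, instead of discovering it when a file refuses to play.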
  24. Samba recently announced a security vulnerability that allows remote root code execution from an anonymous (unauthenticated) connection. Is unRaid also affected? It's probably not that big a deal for unRaid, as in most environments it won't be exposed to users who would want to gain unauthorized access.
  25. I know I already marked this as solved, but here's a follow-up in case someone else has a similar issue. After repairing the filesystem I no longer got the 'kernel oops' problems, but my unRaid system kept rebooting after a few hours of use. I think one of these random reboots is what caused the filesystem corruption in the first place. Having experienced similar issues before with a randomly rebooting system, I suspected the power supply. It was a RaidMax 630W modular power supply; it served me well, but it was over 7 years old. I replaced it with a Corsair CX600 and the system has been rock solid since. The system currently has 7 drives in it, but is designed to take up to 12 LP/green drives.