barrystaes

Everything posted by barrystaes

  1. Downloading, not so much... but if you receive the channel unencrypted you could record it. For example, a Plex (DVR) and/or Tvheadend docker on your Unraid server plus a networked HDHomeRun 4DC tuner; a minimal run command for the latter is sketched below.
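     For anyone wanting to try this, here is a rough sketch of how such a Tvheadend container could be started from the Unraid console. The linuxserver/tvheadend image and the /mnt/user paths are assumptions based on my own setup, so adjust to taste; host networking is used so the HDHomeRun can be discovered on the LAN.
        # Hedged sketch: run the linuxserver.io Tvheadend image with Unraid's
        # nobody/users IDs; /config and /recordings are the image's volumes.
        docker run -d \
          --name=tvheadend \
          --net=host \
          -e PUID=99 -e PGID=100 \
          -v /mnt/user/appdata/tvheadend:/config \
          -v /mnt/user/Media/Recordings:/recordings \
          linuxserver/tvheadend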
  2. Sorry to resurrect. Have you found a cause? Same here: duplicate files (without and with a track-number prefix). It seems Headphones copies and renames but won't delete the old file. I noticed that all tracks (within an album) with a track number are modified one minute later than the ones without... and not by the mover, I think, but by the Headphones docker when it's importing / sorting the files. My setup also won't delete the folders that were processed using the "force import" option. File/folder owner is nobody and group is users, though. (A small script to list the duplicate pairs is sketched below.)
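     In case it helps anyone clean up the same mess, here is a minimal sketch that lists tracks existing both with and without a prefix. The /mnt/user/Music path and the two-digit "NN - " prefix format are assumptions about my own library layout.
        # Hedged sketch: for every "NN - title.ext" file, print its un-prefixed twin
        # if one exists in the same album folder (i.e. a leftover duplicate).
        find /mnt/user/Music -type f -regextype posix-extended -regex '.*/[0-9]{2} - .*' |
        while read -r track; do
          dir=$(dirname "$track")
          twin="$dir/$(basename "$track" | sed -E 's/^[0-9]{2} - //')"
          [ -e "$twin" ] && echo "duplicate pair: $twin"
        done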
  3. Exactly. I understand that /mnt/user/ combines /mnt/disk1/, /mnt/disk2/ and /mnt/cache/ into one view. My Linux-fu is not that strong... how can I find the hard links across my entire drives/shares? These are between different folders on potentially different disks. Manually checking every folder combination by running find . -samefile /path/to/file is a lot of work (a shortcut is sketched below). And... can the dockers even create hard links between two shares? It's not recommended by Lime Technology on this forum, but dockers might try anyway. (So far nothing has broken yet, and I would like to keep it that way.)
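     A minimal sketch of what I would try from the console, assuming GNU find: every file whose link count is above 1 is part of a hard-link pair, and since hard links cannot cross filesystems it is enough to scan each disk (and the cache) separately and group the results by inode number.
        # Hedged sketch: list all files with more than one hard link, grouped by inode,
        # scanning the individual disks rather than the /mnt/user fuse view.
        for fs in /mnt/disk1 /mnt/disk2 /mnt/cache; do
          echo "== $fs =="
          find "$fs" -type f -links +1 -printf '%i %n %p\n' | sort -n
        done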
  4. I am using Unraid 6.3.5 with the LS.io dockers Plex, Sonarr, and Deluge. I've always had Sonarr move the file that Deluge completed, and in order for that to work I put the download folder inside my Media share, so the move really was instantaneous (within the same share / disks). This worked really well, no hard links involved. Fast forward to last week, when I decided to improve (read: break) this by storing downloads in a dedicated Downloads share that's set to Prefer cache disk. (In hindsight, I remember the gain was minimal because new downloads stored in that Media share still used the cache disk for up to 24 hours before the mover kicked in.) Question 1: My first mistake was that I somehow had Sonarr (and Radarr) use hard links instead of move or copy. I mounted the Sonarr docker /media to /mnt/user/Media/, and on this forum I now read that hard links are not supported. So I'd like to undo the mess I made. How do I get rid of these? Can I just delete stuff from /mnt/cache/ and have the "other" link survive? (A quick way to check link counts before deleting is sketched below.) Question 2: What would be the best way to have the Deluge and Sonarr dockers work together, but have Deluge not download to my Media share?
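     To partly answer my own question 1, a hedged sketch for checking, before deleting anything from /mnt/cache/, whether a given file really is one of several hard links: if the link count is above 1, removing this path leaves the other name intact. The example path is hypothetical.
        # Hedged sketch: show the hard-link count (%h) and inode (%i) of a file;
        # a count of 1 means this is the only copy, so deleting it loses the data.
        stat -c '%h %i %n' '/mnt/cache/Downloads/some.episode.mkv'   # hypothetical path

        # Find any other names pointing at the same data on the same filesystem:
        find /mnt/cache -samefile '/mnt/cache/Downloads/some.episode.mkv'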
  5. FYI - This Unraid plugin allows me to set a timeout delay per docker and to start them in a specified order. I have been using it for a while now and it helps avoid false alarms in interdependent dockers.
  6. When updating the node-red container, I got this error: Removing orphan image: 0188edb887fd Error: Error code The download, update, and start went OK, but Unraid could not remove this orphaned image. Does this indicate a problem that needs fixing? (A manual cleanup attempt is sketched below.)
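     What I would try next, as a hedged sketch: docker rmi removes an image by ID and simply refuses if a container still references it, so attempting it should be safe.
        # Hedged sketch: try to remove the orphaned image by hand; docker refuses
        # if any container (even a stopped one) still uses it.
        docker rmi 0188edb887fd

        # List dangling (untagged) images that are candidates for removal:
        docker images -f dangling=true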
  7. Well, since I seem to be making progress (aka having luck) I'll keep documenting, so that others can learn from this or tell me how it's supposed to be done. Current state: after a reboot and increasing the docker image size, dockers work but there is still no web interface. I was able to change the docker image size because the web interface momentarily worked, and I had to force a reset via the console. I'm hoping that the web interface magically starts working after 20 hours of uptime like it did last time, but that's not a solution. edit: the web interface just started working now, after 11.5 hours of uptime. No idea why it took so long. Starting a check right now to rule out that one kind of gremlin.
  8. Since I can't work on this again for the next few hours, I decided to just reboot this server and hope for the best. When clicking Stop for the drive array, it now just says "Unmounting disks...Retry unmounting disk share(s)..." and I can't find out what's blocking it. (A check with fuser is sketched below.)
     root@bTower:~# ls /mnt
     disk1/  disks/
     root@bTower:~# lsof | grep /mnt/disk1/
     root@bTower:~#
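     A hedged sketch of the checks I would run next: fuser reports processes holding files open on a mount, which the lsof | grep approach above can miss for the mount point itself.
        # Hedged sketch: show which processes keep /mnt/disk1 (and the user shares) busy.
        fuser -vm /mnt/disk1
        fuser -vm /mnt/user

        # lsof with +D descends into the directory tree (can be slow on a full disk):
        lsof +D /mnt/disk1 2>/dev/null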
  9. Here it says ... but I can't find what that procedure is. Or is that just referring to the image hash that needs to be computed?
  10. Perhaps the mover moved the docker.img to disk1? Should I move it to cache?
  11. And all of a sudden (about 18 hours after the update) the Dynamix web interface started working, even on the custom port 8888 that I previously set. Strange, as the machine was idling. The Docker tab is missing, so I'm not sure what to do now, and whether doing a reboot now is unsafe or recommended instead. Insights welcome.
  12. No, I had not read the "release notes" in depth, but I had read the "upgrade notes" just above them, which list a reasonable 3 points. I had no reason to believe there would be more "upgrade notes" hidden below the "release notes" just under that. I skimmed over those, having already read them in the blog. Here I had typed a really satisfying rant on Unraid info being chaotic (call it constructive criticism), but that won't help me solve this problem now, so I removed it. Anyway, what could I have missed? I'm reading it now, but besides not disabling auto-start I'm not aware of anything that I did wrong. How can I disable autostart from the console / without the web interface? (My best guess is sketched below.)
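     Answering my own question with a hedged, unverified sketch: on my flash drive the Docker service setting appears to live in /boot/config/docker.cfg, so toggling DOCKER_ENABLED there before a reboot should keep the containers (and the loop-mounted image) from starting at all. Treat the file name and key as assumptions about my install, not documented behaviour.
        # Hedged sketch (unverified): disable the Docker service on next boot by
        # editing the config on the flash drive. Assumes the key is DOCKER_ENABLED
        # in /boot/config/docker.cfg, as it is on my system.
        cp /boot/config/docker.cfg /boot/config/docker.cfg.bak
        sed -i 's/^DOCKER_ENABLED=.*/DOCKER_ENABLED="no"/' /boot/config/docker.cfg
        grep DOCKER_ENABLED /boot/config/docker.cfg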
  13. Odd. The 6.2 release thread points to this thread about 6.1 for plugin support. Does that mean that any plugin that worked in 6.1.x is expected to work OK in 6.2? That would be worth pointing out explicitly.
  14. I only just figured out that I can run "diagnostics" at the console, sorry. I attached the resulting ZIP file to my original top post. And /var/log/syslog says it's at /mnt/disk1/docker.img, and I checked, it's actually there. And 10G big!? ...wow.
  15. OK... here is all the output I think might be relevant. Look at the docker errors. I'm no expert, but the only errors I found when digging via SSH are:
     root@bTower:/mnt/cache# docker ps
     Cannot connect to the Docker daemon. Is the docker daemon running on this host?
     root@bTower:/mnt/cache# tail /var/log/docker.log
     time="2016-10-14T10:03:19.713510985+02:00" level=error msg="migration failed for 4921b0eb883769bdb272078d0ca0889b0b1d1d9e808311ca208d8f6ae341e5d5, err: open /var/lib/docker/graph/df2d4f955de259f76569c58e5a32e836c7ccd788e686fdd1d688158e1e958315/.migration-diffid: no such file or directory"
     time="2016-10-14T10:03:19.713527499+02:00" level=error msg="migration failed for 54f1994713ff8706ba5d602a0c4bffdae7f923c0f7cb113fbf2c291e8b9da7c6, err: open /var/lib/docker/graph/df2d4f955de259f76569c58e5a32e836c7ccd788e686fdd1d688158e1e958315/.migration-diffid: no such file or directory"
     time="2016-10-14T10:03:19.713543989+02:00" level=error msg="migration failed for 283c3da6cfa67eb155d3e745e7f7e2d4054170cb5b71dfa089931ce30a07f354, err: open /var/lib/docker/graph/df2d4f955de259f76569c58e5a32e836c7ccd788e686fdd1d688158e1e958315/.migration-diffid: no such file or directory"
     time="2016-10-14T10:03:19.713560468+02:00" level=error msg="migration failed for 797f5e4995e41ec89f013370a107c8dabb07790ba2c7814125bd770ae3a0ff09, err: open /var/lib/docker/graph/df2d4f955de259f76569c58e5a32e836c7ccd788e686fdd1d688158e1e958315/.migration-diffid: no such file or directory"
     time="2016-10-14T10:03:19.713576754+02:00" level=error msg="migration failed for a1d1943feb6efbd24717645960729f060a44da792b67f59399afb15ab35a4b9b, err: open /var/lib/docker/graph/df2d4f955de259f76569c58e5a32e836c7ccd788e686fdd1d688158e1e958315/.migration-diffid: no such file or directory"
     time="2016-10-14T10:03:19.713593781+02:00" level=error msg="migration failed for 9168b1d3ced4cc883ec1f6f82595eb19859236177d2094acb47dc52c8de32bf7, err: open /var/lib/docker/graph/df2d4f955de259f76569c58e5a32e836c7ccd788e686fdd1d688158e1e958315/.migration-diffid: no such file or directory"
     time="2016-10-14T10:03:19.713610168+02:00" level=error msg="migration failed for fda61ed038fa3277d2bfe121b797869541734c7b88da8e64d02e5a5fd1d0a2ed, err: open /var/lib/docker/graph/df2d4f955de259f76569c58e5a32e836c7ccd788e686fdd1d688158e1e958315/.migration-diffid: no such file or directory"
     time="2016-10-14T10:03:19.889907013+02:00" level=error msg="Graph migration failed: \"open /var/lib/docker/.migration-v1-images.json: no space left on device\". Your old graph data was found to be too inconsistent for upgrading to content-addressable storage. Some of the old data was probably not upgraded. We recommend starting over with a clean storage directory if possible."
     time="2016-10-14T10:03:19.889966418+02:00" level=info msg="Graph migration to content-addressability took 36003.30 seconds"
     time="2016-10-14T10:03:20.103472277+02:00" level=fatal msg="Error starting daemon: Error initializing network controller: error obtaining controller instance: mkdir /var/lib/docker/network: no space left on device"
     root@bTower:/# tail /var/log/syslog
     Oct 14 12:28:26 bTower emhttp: err: shcmd: shcmd (221): exit status: 1
     Oct 14 12:31:29 bTower sshd[29498]: Accepted password for root from 192.168.4.200 port 57006 ssh2
     Oct 14 12:43:55 bTower emhttp: shcmd (236): set -o pipefail ; /usr/local/sbin/mount_image '/mnt/disk1/docker.img' /var/lib/docker 10 |& logger
     Oct 14 12:43:55 bTower root: /mnt/disk1/docker.img is in-use, cannot mount
     Oct 14 12:43:55 bTower emhttp: err: shcmd: shcmd (236): exit status: 1
     Oct 14 12:59:23 bTower emhttp: shcmd (251): set -o pipefail ; /usr/local/sbin/mount_image '/mnt/disk1/docker.img' /var/lib/docker 10 |& logger
     Oct 14 12:59:23 bTower root: /mnt/disk1/docker.img is in-use, cannot mount
     Oct 14 12:59:23 bTower emhttp: err: shcmd: shcmd (251): exit status: 1
     And for good measure, what's currently running:
     root@bTower:~# top
     top - 12:31:41 up 12:29, 1 user, load average: 0.00, 0.00, 0.00
     Tasks: 275 total, 1 running, 273 sleeping, 0 stopped, 1 zombie
     %Cpu(s): 0.0 us, 0.1 sy, 0.0 ni, 99.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
     KiB Mem : 6172348 total, 314412 free, 106320 used, 5751616 buff/cache
     KiB Swap: 0 total, 0 free, 0 used. 4933860 avail Mem
     PID   USER  PR  NI  VIRT   RES   SHR  S %CPU %MEM TIME+    COMMAND
     1654  root  20   0  9680   2568  2112 S  0.3  0.0 1:13.57  cpuload
     29559 root  20   0  16600  2804  2224 R  0.3  0.0 0:00.04  top
     1     root  20   0  4372   1648  1540 S  0.0  0.0 0:09.72  init
     2     root  20   0  0      0     0    S  0.0  0.0 0:00.00  kthreadd
     3     root  20   0  0      0     0    S  0.0  0.0 0:03.10  ksoftirqd/0
     5     root   0 -20  0      0     0    S  0.0  0.0 0:00.00  kworker/0:+
     7     root  20   0  0      0     0    S  0.0  0.0 0:26.76  rcu_preempt
     8     root  20   0  0      0     0    S  0.0  0.0 0:00.00  rcu_sched
     9     root  20   0  0      0     0    S  0.0  0.0 0:00.00  rcu_bh
     10    root  rt   0  0      0     0    S  0.0  0.0 0:00.13  migration/0
     11    root  rt   0  0      0     0    S  0.0  0.0 0:00.06  migration/1
     12    root  20   0  0      0     0    S  0.0  0.0 0:00.09  ksoftirqd/1
     14    root   0 -20  0      0     0    S  0.0  0.0 0:00.00  kworker/1:+
     15    root  rt   0  0      0     0    S  0.0  0.0 0:00.07  migration/2
     16    root  20   0  0      0     0    S  0.0  0.0 0:00.08  ksoftirqd/2
     17    root  20   0  0      0     0    S  0.0  0.0 0:00.00  kworker/2:0
     19    root  rt   0  0      0     0    S  0.0  0.0 0:00.09  migration/3
     root@bTower:~# ps -f
     UID   PID   PPID  C STIME TTY   TIME     CMD
     root  29525 29498 0 12:31 pts/0 00:00:00 -bash
     root  29851 29525 0 12:32 pts/0 00:00:00 ps -f
     root@bTower:~# free
           total    used    free    shared  buff/cache  available
     Mem:  6172348  105744  314824  588364  5751780     4934288
     Swap: 0        0       0
     root@bTower:/mnt/cache# df
     Filesystem   1K-blocks    Used        Available   Use%  Mounted on
     rootfs       3017320      585464      2431856     20%   /
     tmpfs        3086172      188         3085984     1%    /run
     devtmpfs     3017336      0           3017336     0%    /dev
     cgroup_root  3086172      0           3086172     0%    /sys/fs/cgroup
     tmpfs        131072       2928        128144      3%    /var/log
     /dev/sda1    15621120     380952      15240168    3%    /boot
     /dev/md1     5859568668   5044055040  815513628   87%   /mnt/disk1
     /dev/md2     5858435620   2513603448  3344832172  43%   /mnt/disk2
     /dev/sdb1    488386552    109418000   378409024   23%   /mnt/cache
     shfs         11718004288  7557658488  4160345800  65%   /mnt/user0
     shfs         12206390840  7667076488  4538754824  63%   /mnt/user
     /dev/loop0   10485760     9604996     655004      94%   /var/lib/docker
     /dev/loop1   1048576      16804       926300      2%    /etc/libvirt
     Again, the /dev/loop0 line shows another Docker issue: use = 94%. But that might be expected, I don't know.
  16. First off, I'm really happy with Unraid and its dockers, so I was eager to try out the new 6.2 release! After updating all plugins and dockers, yesterday I decided to take the gamble and click the update button beside the Unraid listing on the "Plugins" page in the Dynamix web interface. Everything was working well. After clicking that button, it downloaded some files and asked for a reboot, so I stopped the array (couldn't unmount until I stopped a screen session I apparently left running) and it went down. When it came back online I found that while I can ping, and SSH, and the shares work, the web interface and docker containers do not work. It's idling and I see no change after 12 hours of waiting. I have no idea where to look for errors and am still/forever learning Linux, so suggestions are most welcome! Below are all the details I think might be relevant. I suspect something with Docker; I did forget to disable autostart. How do I disable that from the command line? I could not find it online. What I also could not find in the v6 documentation was how to use the "safe mode" that is apparently a feature from older versions. Has it been deprecated? btower-diagnostics-20161014-1341.zip
  17. Thanks! Also thanks for this notice in the plugin: Though that site confused me because I mistook the kernel versions for (really old) Unraid versions. Also, https://github.com/theone11/virtualbox_plugin hasn't been updated in a while, so I had the impression it was being abandoned until I found your post here. Just to be sure (I must admit I have not read the entire thread): is it safe to say this VirtualBox plugin "just works" when updating to Unraid 6.2 with existing VMs? PS. At first I was looking for something like "Fixed unRAID v6.2 compatibility" in the plugin changelog, which is where I naively expected this info.
  18. I haven't done much testing with dockers (will do) but I think it's because I don't use the default port 80 for Dynamix.
  19. Thanks for the heads-up. So I take it this fixes the Dynamix web interface going down? http://lime-technology.com/forum/index.php?topic=44637.msg442500#msg442500 I can't use the VirtualBox plugin without crashing the web interface.
  20. I looked at this as a faster alternative to import data from my QNAP NAS devices, but seeing that this only hits 1.5 MB/s... f*** Unraid really needs an "Unraid data import" plugin/docker provided by Lime Technology. People (new customers / newbies like me!) have terabytes of data and want the fastest way to move it. So far my best guess is a cp (it's faster) and an rsync afterwards to make sure I didn't miss anything (sketched below). But resuming the cp is hard; I still don't really know what it'll do when it encounters an already existing file. I'm looking into using FTP because rsync is too slow, cp can't resume, and mapping via SMB is another performance hit. Also, all attempts to disable rsync encryption made no difference. And I once found that unencrypted FTP is the fastest transfer to/from my QNAP devices.
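     The cp-then-rsync approach I describe, as a hedged sketch: the -n flag makes cp skip files that already exist (which also makes re-running it after an interruption a cheap resume), and the final rsync only transfers what the bulk copy missed. The /mnt/qnap mount point and share names are assumptions about my setup.
        # Hedged sketch: bulk copy first, skipping anything already present, then let
        # rsync verify and fill the gaps. Assumes the QNAP share is mounted at /mnt/qnap.
        cp -anv /mnt/qnap/Media/. /mnt/user/Media/

        # Second pass: sizes and mtimes are compared, only missing/changed files move.
        rsync -av --partial --progress /mnt/qnap/Media/ /mnt/user/Media/

        # Daemon mode avoids SSH encryption entirely, if the QNAP exposes an rsync module:
        # rsync -av rsync://qnap/Media/ /mnt/user/Media/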
  21. Hmm, odd... when I (only) add (and connect as) user "root" it does work.
  22. On my new Unraid 6.1.7 server I enabled FTP and added the user "barry", but I can't connect. When I connect with FileZilla, it says:
     Status: Connecting to 192.168.2.30:21...
     Status: Connection established, waiting for welcome message...
     Status: Insecure server, it does not support FTP over TLS.
     Command: USER barry
     Response: 331 Please specify the password.
     Command: PASS ***********
     Response: 500 OOPS: priv_sock_get_result
     Error: Critical error: Could not connect to server
     Status: Disconnected from server
     [the same sequence repeats for two more connection attempts]
     And in the Unraid log, I see:
     Feb 9 09:13:47 bTower vsftpd[30744]: connect from 192.168.10.10 (192.168.10.10)
     Feb 9 09:13:47 bTower kernel: nf_conntrack: falling back to vmalloc.
     Feb 9 09:13:47 bTower kernel: nf_conntrack: falling back to vmalloc.
     Feb 9 09:13:47 bTower kernel: vsftpd[30744]: segfault at 0 ip 00002b42c216fd16 sp 00007fff1ce320b8 error 4 in libc-2.17.so[2b42c2039000+1bf000]
     Feb 9 09:13:55 bTower vsftpd[30772]: connect from 192.168.10.10 (192.168.10.10)
     Feb 9 09:13:55 bTower kernel: nf_conntrack: falling back to vmalloc.
     Feb 9 09:13:55 bTower kernel: nf_conntrack: falling back to vmalloc.
     Feb 9 09:13:55 bTower kernel: vsftpd[30772]: segfault at 0 ip 00002b2c0d14ad16 sp 00007fffb5baa438 error 4 in libc-2.17.so[2b2c0d014000+1bf000]
     Feb 9 09:14:31 bTower vsftpd[30938]: connect from 192.168.10.10 (192.168.10.10)
     Feb 9 09:14:31 bTower kernel: nf_conntrack: falling back to vmalloc.
     Feb 9 09:14:31 bTower kernel: nf_conntrack: falling back to vmalloc.
     Feb 9 09:14:31 bTower kernel: vsftpd[30938]: segfault at 0 ip 00002b722f0c1d16 sp 00007ffd6630b188 error 4 in libc-2.17.so[2b722ef8b000+1bf000]
     The segfault troubles me; it's consistent. Has anyone else seen things like this? (Also, this hardware performed fine before I installed Unraid, but I'll run a memcheck regardless, just in case.)
  23. I have the same problem: the Dynamix web interface not responding. Other web interfaces of dockers continue to respond normally. I have had this happen a few times already, all on v6, and last week on a near-new installation. When it's idling, nothing goes wrong. This time it was restoring disk1 (I put in a 6TB HDD) and I enabled VirtualBox. It worked at that point, but not shortly after. The syslog is:
     Feb 5 19:10:35 bTower dnsmasq[13042]: read /etc/hosts - 1 addresses
     Feb 5 19:10:35 bTower dnsmasq[13042]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
     Feb 5 19:10:35 bTower dnsmasq-dhcp[13042]: read /var/lib/libvirt/dnsmasq/default.hostsfile
     Feb 5 19:10:35 bTower kernel: virbr0: port 1(virbr0-nic) entered disabled state
     Feb 5 19:16:02 bTower rc.virtualbox[16378]: Plugin configuration written
     Feb 5 19:16:22 bTower emhttp: cmd: /usr/local/emhttp/plugins/virtualbox/scripts/rc.virtualbox install
     Feb 5 19:16:22 bTower rc.virtualbox[16555]: Installing Virtualbox package (v5.0.14)...
     Feb 5 19:16:31 bTower groupadd[16820]: group added to /etc/group: name=vboxusers, GID=999
     Feb 5 19:16:31 bTower groupadd[16820]: group added to /etc/gshadow: name=vboxusers
     Feb 5 19:16:31 bTower groupadd[16820]: new group: name=vboxusers, GID=999
     Feb 5 19:16:32 bTower kernel: vboxdrv: Found 8 processor cores
     Feb 5 19:16:32 bTower kernel: vboxdrv: TSC mode is Invariant, tentative frequency 2659870408 Hz
     Feb 5 19:16:32 bTower kernel: vboxdrv: Successfully loaded version 5.0.14 (interface 0x00240000)
     Feb 5 19:16:32 bTower kernel: VBoxNetFlt: Successfully started.
     Feb 5 19:16:32 bTower kernel: VBoxNetAdp: Successfully started.
     Feb 5 19:16:32 bTower kernel: VBoxPciLinuxInit
     Feb 5 19:16:32 bTower kernel: vboxpci: IOMMU found
     Feb 5 19:16:32 bTower rc.virtualbox[17062]: Installation of Virtualbox package v5.0.14 succeeded
     Feb 5 19:16:32 bTower rc.virtualbox[17063]: /boot/custom/vbox does not exists - Cannot change VirtualBox symlink
     Feb 5 19:16:32 bTower rc.virtualbox[17064]: Installing Virtualbox Extension package (v5.0.14)...
     Feb 5 19:16:35 bTower rc.virtualbox[17154]: Installation of Virtualbox Extension package v5.0.14 succeeded
     Feb 5 19:23:25 bTower rc.virtualbox[19622]: Plugin configuration written
     Feb 5 19:23:33 bTower emhttp: cmd: /usr/local/emhttp/plugins/virtualbox/scripts/rc.virtualbox start_vboxwebsrv
     Feb 5 19:23:33 bTower rc.virtualbox[19846]: /opt/VirtualBox/VBoxManage setproperty websrvauthlibrary null
     Feb 5 19:23:33 bTower rc.virtualbox[19861]: vboxwebsrv service started
     Feb 5 21:29:06 bTower sshd[17897]: Accepted password for root from 192.168.2.89 port 50513 ssh2
     Feb 5 21:36:34 bTower emhttp: main: can't bind listener socket: Address already in use
     Note: after I used the web interface to enable VirtualBox, it stopped working. Perhaps the "Feb 5 19:23:33 bTower emhttp" line is related? So I tried restarting the web interface:
     killall emhttp
     nohup /usr/local/sbin/emhttp &
     But to no avail. Then I remembered I changed the Dynamix web interface port from 80 to 8888, because I needed port 80 for a docker. I can't remember where I set that. I'd rather not powercycle and lose the preclear it's doing on my new drive.
     update 1 - To the powers that be: perhaps making the Dynamix port easier to change would be a good feature to add.
     update 2 - I just un-forgot how I set the Dynamix port: http://lime-technology.com/forum/index.php?topic=24730.0
     root@Tower:/var/log# killall emhttp
     emhttp: no process found
     root@Tower:/var/log# nohup /usr/local/sbin/emhttp -p 8888 &
     [1] 26291
     root@Tower:/var/log# nohup: ignoring input and appending output to ‘nohup.out’
     [1]+ Segmentation fault nohup /usr/local/sbin/emhttp -p 8888
     root@Tower:/var/log#
     But... nothing. What... wait... SEGFAULT?! This same hardware (case + PSU + mobo + RAM + CPU) ran Windows for years without problems; I really think it's the software at fault here. (A quick check of what already holds the port is sketched below.)
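     A hedged sketch of how I would check what is already bound to the web-interface port before relaunching emhttp; the "Address already in use" message suggests something (a docker on port 80, or a leftover emhttp) still holds the socket. The port numbers are the ones from my own setup.
        # Hedged sketch: see which process currently owns port 80 and port 8888.
        netstat -tlnp | grep -E ':(80|8888) '

        # If a half-dead emhttp still owns the port, kill it by PID before restarting:
        # kill <pid-from-netstat-output>    # hypothetical, take the PID from the line above
        nohup /usr/local/sbin/emhttp -p 8888 &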
  24. I got a reply on this in another thread: "One quick thing. Double check that you do not have an ad blocker running or some other software/browser setting to prevent pop-ups. You may have to whitelist your unRAID server." I just double-checked: no ad is blocked. But where in Unraid would a new window pop up? Should I leave the web interface open or something? (Which I didn't.)