DiscoDuck

Everything posted by DiscoDuck

  1. I'm trying to get a script to transfer files from my cache to the remote mount. It should be triggered when the cache is >70% full and then transfer files until the cache is <70%. I'm not very knowledgeable when it comes to this stuff, so I've come up with this by studying other people's work on the mighty interwebs. The first attempt below transfers one file and then the script completes. The second loops the script, but it continues even after the cache drive utilization is below 70%. I suspect the mistake I've made is that the first line with "df -h ...." is stale and that the subsequent commands keep using that initial value the whole time? Can anyone help me get this script working as I intend? (See the loop sketch after this post list.)
  2. As a workaround I've created a cron job in User Scripts that runs the below:
     docker exec -d webgrabplus /defaults/update.sh
     If the container is updated, the license must be forced manually the first time. (A sketch of the script body is after this post list.)
  3. No, I do it the same way as you do - a manual update.
  4. Is the container shipped with the beta patch? It hasn't worked for me for quite some time now and my log file states 3.2.1.0
  5. Ok. When I just run ffmpeg I get the output below, and libxml2 isn't enabled. Maybe I've missed something basic in the configuration?
     ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers
     built with gcc 9.3.0 (Alpine 9.3.0)
     configuration: --prefix=/usr --enable-avresample --enable-avfilter --enable-gnutls --enable-gpl --enable-libass --enable-libmp3lame --enable-libvorbis --enable-libvpx --enable-libxvid --enable-libx264 --enable-libx265 --enable-libtheora --enable-libv4l2 --enable-libdav1d --enable-postproc --enable-pic --enable-pthreads --enable-shared --enable-libxcb --enable-libssh --disable-stripping --disable-static --disable-librtmp --enable-vaapi --enable-vdpau --enable-libopus --enable-libaom --disable-debug
     libavutil      56. 51.100 / 56. 51.100
     libavcodec     58. 91.100 / 58. 91.100
     libavformat    58. 45.100 / 58. 45.100
     libavdevice    58. 10.100 / 58. 10.100
     libavfilter     7. 85.100 /  7. 85.100
     libavresample   4.  0.  0 /  4.  0.  0
     libswscale      5.  7.100 /  5.  7.100
     libswresample   3.  7.100 /  3.  7.100
     libpostproc    55.  7.100 / 55.  7.100
     Hyper fast Audio and Video encoder
     usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...
     Use -h to get full help or, even better, run 'man ffmpeg'
  6. Would it be possible to update the ffmpeg shipped with 4.2 so it includes libxml2? Then it should be possible to pipe mpd (MPEG-DASH) streams. (A quick way to check for libxml2 support is sketched after this post list.)
  7. From here https://github.com/XMLTV/xmltv#building
  8. I think it might be the wrong Perl version. The container ships with 5.30; on https://github.com/XMLTV/xmltv they state
  9. Yes, I've tried both docker exec -it -u abc tvheadend /usr/bin/tv_grab_eu_xmltvse --configure on my host and /usr/bin/tv_grab_eu_xmltvse --configure from the console inside the Docker container, with the same result. It fails and creates an empty TMP file, .xmltv/tv_grab_eu_xmltvse.conf.TMP; I never get to the part where I can select country etc. The picons I compared with came from picons.eu, so that's the same source, right? https://github.com/picons/picons/archive
  10. I'm trying to configure tv_grab_eu_xmltvse. When I run it with --configure I get the below and the configuration fails: https://pastebin.com/rYmgdkYs I noted that the xmltv version shipped with the Docker container is 0.6.1 while the latest is 0.6.3 (not sure if that's intentional). I've also noticed that some picons are not updated compared with the source, e.g. vfilmfamily.
  11. Copied from the unassigned drive to another computer. If it's a network issue, shouldn't all transfers have been affected? It worked fine when copying from the cache pool.
  12. Yes, that's understandable. But why does the Unassigned device get affected? And the Rclone mount? And why does the network all of a sudden cut out? Is it possible to increase the nice value of the parity check to let other processes have priority during the check? (See the renice sketch after this post list.)
  13. I recently upgraded to 6.6.6 and I'm now running a parity check. I tried to watch an episode while it was running and it constantly buffers and even loses network connection when the transfer crawls to a halt (log snippet below). I tried to copy the same episode to a "normal" Win 10 computer: incredibly slow. The unraidd process is at 100%; when it briefly comes down towards 80-90% the speed improves until unraidd is back at 100%. I tried to transfer from my Rclone-Gdrive share to the Win10 machine, also super slow. Transferring some files from the cache drive to Win10 gets full speed. I've been watching TV from my tvheadend Docker without any issues, and sonarr, nzbget etc. work fine. I use Samba for all network transfers. Edit: Also tried to copy from an unassigned drive, same issue as above. I have never experienced this prior to the upgrade. What can be the cause?
     Dec 11 14:56:21 Unraid kernel: e1000e 0000:06:00.0 eth0: Reset adapter unexpectedly
     Dec 11 14:56:22 Unraid kernel: br0: port 1(eth0) entered disabled state
     Dec 11 14:56:24 Unraid ntpd[1654]: Deleting interface #1 br0, 192.168.10.20#123, interface stats: received=180, sent=181, dropped=0, active_time=106935 secs
     Dec 11 14:56:24 Unraid ntpd[1654]: 142.147.92.5 local addr 192.168.10.20 -> <null>
     Dec 11 14:56:24 Unraid ntpd[1654]: Deleting interface #4 br0, fe80::6d88:ff20:fe29:f104%12#123, interface stats: received=0, sent=0, dropped=0, active_time=106930 secs
     Dec 11 14:56:26 Unraid kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
     Dec 11 14:56:26 Unraid kernel: br0: port 1(eth0) entered blocking state
     Dec 11 14:56:26 Unraid kernel: br0: port 1(eth0) entered forwarding state
     Dec 11 14:56:28 Unraid ntpd[1654]: Listen normally on 5 br0 192.168.10.20:123
     Dec 11 14:56:28 Unraid ntpd[1654]: Listen normally on 6 br0 [fe80::6d88:ff20:fe29:f104%12]:123
     Dec 11 14:56:28 Unraid ntpd[1654]: new interface(s) found: waking up resolver
  14. Has anyone managed to set up a reverse proxy with Let's Encrypt? If so, would you be able to share how you did it?
  15. I'm considering starting to use unRAID again after a couple of years on other systems. My server has been around for quite some time and consists of older hardware: an X7DBE with 2 x X5450, 64 GB RAM and 3 x AOC-SAT2-MV8. 16 HDDs of various sizes, average size 3 TB. 2 SSDs of ~120 GB each. 3 Gbit NICs and IPMI. As of now I'm using Proxmox with Arch & Windows VMs, with some Docker containers running on the Arch host. Disks are pooled using MergerFS and parity is done by SnapRAID. Overall the system is working quite nicely, but at times I get tired of CLI tinkering and figuring things out. So one could say that I'm getting old and lazy and looking for a convenient GUI and less hassle to get everything working...
      It seems that unRAID has matured a lot since I last used it; from what I've been reading, Docker containers and VMs seem to deploy without issues. Is this the general consensus? I've been running my disks and Arch host fully encrypted, and from what I understand unRAID doesn't support encryption, which is pretty much the only negative I've observed since I started looking into it again. Is encryption on the roadmap in the not too distant future?
      What I'm looking to get out of transferring to unRAID:
      1: A more power-efficient system. The X5450 is not low-power and efficient, and there's a bunch of drives in there as well. I've been looking into user shares, and the possibility to store all episodes of a season on one disk seems highly desirable. In order to leave some wiggle room for quality updates, what is a suitable minimum free space setting to avoid ending up with full disks, 100 GB?
      2: I'm planning to use a 4 TB disk for nzbget & rutorrent. There's no need to protect this data, so to my understanding I should mount this disk into the Docker containers using the Unassigned Devices plugin?
      3: I only have 2 rather small SSDs. I would prefer to use them just for Docker containers and VMs. I will run 2 Windows 10 VMs. I'm planning to use about the same number of containers and VMs as I do today, so there should be sufficient space for that. But what about the cache drive? I would like to use the smallest HDD I have (1 TB) as the cache drive; is this possible? In order to do this, do I need to mount the SSDs as UD as well, or is this considered bad practice?
      4: Any known issues with LACP?
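
Re: post 1 - a minimal sketch of a loop that re-reads the cache usage on every pass, so the threshold check never works from a stale value. The paths /mnt/cache and /mnt/remote, the 70% threshold and the find/mv file selection are illustrative assumptions, not the original script.

    #!/bin/bash
    # Move files off the cache while it is more than 70% full.
    # /mnt/cache, /mnt/remote and THRESHOLD are placeholders - adjust to your setup.
    THRESHOLD=70
    SRC="/mnt/cache"
    DST="/mnt/remote"

    # Re-read the usage inside the loop condition so every pass sees the current value.
    while [ "$(df --output=pcent "$SRC" | tail -1 | tr -dc '0-9')" -gt "$THRESHOLD" ]; do
        # Pick the oldest file, move it, then loop back and check the usage again.
        FILE="$(find "$SRC" -type f -printf '%T@ %p\n' | sort -n | head -1 | cut -d' ' -f2-)"
        [ -z "$FILE" ] && break         # nothing left to move
        mv -v "$FILE" "$DST/" || break  # stop if a move fails
    done
    # Note: this moves files flat into $DST; preserving the directory structure
    # (e.g. with rsync --remove-source-files) would need extra handling.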
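Re: post 2 - what the User Scripts entry could look like; the container name and update path are taken from the post, while the cron schedule shown in the comment is only an example.

    #!/bin/bash
    # User Scripts job, run on a custom cron schedule (e.g. 0 4 * * * for 04:00 daily).
    # Starts the WebGrab+Plus update inside the container without attaching a terminal.
    docker exec -d webgrabplus /defaults/update.sh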
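Re: posts 5-6 - a quick way to check whether an ffmpeg build includes libxml2, which ffmpeg's DASH demuxer needs in order to read .mpd manifests. --enable-libxml2 is the standard FFmpeg configure flag; whether the container image can be rebuilt with it is up to the image maintainer.

    # Print the configure flags of the installed build and look for libxml2;
    # no match means this build cannot demux .mpd (DASH) streams.
    ffmpeg -hide_banner -buildconf | grep -i libxml2

    # When building ffmpeg from source, DASH support needs libxml2 enabled at configure time:
    #   ./configure ... --enable-libxml2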
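Re: post 12 - on the nice value: a higher nice value means lower priority, so raising it on the parity thread is indeed the direction that lets other processes go first. Below is a rough sketch using standard tools, assuming the parity work shows up as the unraidd process mentioned in post 13; whether renicing that thread actually helps on Unraid is uncertain, and the Tunable (md_sync_window)/(md_sync_thresh) values under Settings > Disk Settings are the more commonly suggested knobs, if your version exposes them.

    # Find the parity thread and lower its CPU and I/O priority for the duration of the check.
    pgrep unraidd                        # confirm the PID(s) first, e.g. via top
    renice -n 19 -p $(pgrep unraidd)     # nice 19 = lowest CPU priority
    ionice -c 3 -p $(pgrep unraidd)      # class 3 = idle I/O scheduling; I/O often matters more here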