Leaderboard

Popular Content

Showing content with the highest reputation on 11/08/19 in all areas

  1. Tdarr

     Application: Tdarr - https://github.com/HaveAGitGat/Tdarr
     Docker Hub: https://hub.docker.com/r/haveagitgat/tdarr
     GitHub: https://github.com/HaveAGitGat/Tdarr
     Documentation: https://github.com/HaveAGitGat/Tdarr/wiki
     Discord: https://discord.gg/GF8X8cq
     Reddit: https://www.reddit.com/r/Tdarr/

     GitHub Summary: Tdarr is a self-hosted web app for automating media library transcode management and making sure your content is in your preferred codecs. Designed to work alongside Sonarr/Radarr and built with the aims of modularisation, parallelisation and scalability, each library you add has its own transcode settings, filters and schedule. Workers can be fired up and closed down as necessary, and are split into 3 types - 'general', 'transcode' and 'health check'. Worker limits can be managed by the scheduler as well as manually. For a desktop application with similar functionality please see HBBatchBeast.

     - FFmpeg/HandBrake + video health checking (Windows, macOS, Linux & Docker)
     - Use/create Tdarr Plugins for infinite control over how your files are processed: https://github.com/HaveAGitGat/Tdarr_Plugins
     - Audio and video library management
     - 7-day, 24-hour scheduler
     - Folder watcher
     - Worker stall detector
     - Load balancing between libraries/drives
     - Tested on a 180,000-file dummy library with 60 workers
     - Search for files based on hundreds of properties
     - Expanding stats page

     Currently in alpha, but looking for feedback/suggestions. unRAID setup screenshots below.

     Usage examples:
     https://gfycat.com/cooperativetestyjaguarundi
     https://gfycat.com/uneveninconsequentialguernseycow

     For unRAID, please see the following screenshots for the MongoDB and Tdarr container configs.

     2019-12-16: Please note I'm on a 2-week break. Sorry for the inconvenience.
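     As a rough sketch of what the screenshotted container configs amount to on the command line: the image names come from the links above, but the container names, web UI port and volume paths here are illustrative guesses, not the official template values - check the wiki's installation page for the settings Tdarr actually expects.

     # MongoDB instance the Tdarr alpha uses as its database backend
     # (paths under /mnt/user/appdata are an Unraid convention, not a requirement)
     docker run -d \
       --name=tdarr-mongo \
       -v /mnt/user/appdata/tdarr/db:/data/db \
       mongo

     # Tdarr web app, published on an assumed web UI port of 8265
     # (the variable that points Tdarr at MongoDB is documented in the wiki)
     docker run -d \
       --name=tdarr \
       -p 8265:8265 \
       -v /mnt/user/appdata/tdarr:/app/server \
       -v /mnt/user/media:/home/Tdarr/media \
       haveagitgat/tdarr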
    1 point
  2. This is a continuation of the plugin written by bshakil to hot plug USB devices in a VM. It has been updated for Unraid 6.5 due to some file structure changes. You will need to remove the previous plugin before installing this one.

     Installing the plugin

     You can install the plugin from Community Applications or by pasting the following link in the Install Plugin tab on the Unraid plugins page:

     https://github.com/dlandon/libvirt.hotplug.usb/raw/master/libvirt.hotplug.usb.plg

     USB hot plug is now incorporated into Unraid by editing the XML and selecting/unselecting USB devices. This plugin offers an alternative to that method.
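     For reference, this is roughly what hot-plugging a USB device into a running libvirt VM looks like when done by hand from the console; the plugin puts a friendlier front end on the same idea. The VM name "Windows10" and the vendor/product IDs below are placeholders - substitute your own domain name and the IDs reported by lsusb.

     # Describe the USB device to attach; the vendor/product IDs below are
     # placeholders - use the ones reported by `lsusb`. Writes /tmp/usbdev.xml.
     printf '%s\n' \
       "<hostdev mode='subsystem' type='usb'>" \
       "  <source>" \
       "    <vendor id='0x046d'/>" \
       "    <product id='0xc52b'/>" \
       "  </source>" \
       "</hostdev>" > /tmp/usbdev.xml

     # Attach to / detach from the running VM without restarting it
     virsh attach-device Windows10 /tmp/usbdev.xml --live
     virsh detach-device Windows10 /tmp/usbdev.xml --live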
    1 point
  3. Oct 12 03:35:17 Storage kernel: tun: unexpected GSO type: 0x0, gso_size 1357, hdr_len 1411
     Oct 12 03:35:17 Storage kernel: tun: 13 e4 3d f7 10 86 b8 9e 87 b1 5f 81 d9 7a 98 c9  ..=......._..z..
     Oct 12 03:35:17 Storage kernel: tun: 26 fa 2d 78 50 03 f2 b2 22 55 bc 68 29 75 83 46  &.-xP..."U.h)u.F
     Oct 12 03:35:17 Storage kernel: tun: 04 35 d4 e4 71 d8 5c 04 e3 e2 a2 6d 4e 1f 22 9d  .5..q.\....mN.".
     Oct 12 03:35:17 Storage kernel: tun: 6f 97 72 60 c9 63 2b dc f4 ec c7 4f 68 60 66 9e  o.r`.c+....Oh`f.

     Getting the above message repeated over and over again in the log whenever a docker tries to access the NIC.

     storage-diagnostics-20191012-0237.zip
    1 point
  4. Do you want to stay informed about all things Unraid? Sign up for our monthly newsletter and you'll receive a concise digest of new blog posts, popular forum posts, product/company announcements, videos and more! https://unraid.net/newsletter Cheers
    1 point
  5. The macinabox template uses custom OVMF files. If you change

     <os>
       <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
       <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
       <nvram>/etc/libvirt/qemu/nvram/e930dfa3-ce5f-4a14-a642-d140ed8035bd_VARS-pure-efi.fd</nvram>
     </os>

     to the following, the screen corruption will be gone:

     <os>
       <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
       <loader readonly='yes' type='pflash'>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_CODE.fd</loader>
       <nvram>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_VARS.fd</nvram>
     </os>

     Hope that helps 😀
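     One way to apply that edit from the command line is with virsh; the XML view in the Unraid VM manager works just as well. The domain name below is assumed to match the template's default folder name - adjust it to whatever your VM is actually called.

     # Open the VM definition in an editor and swap in the <loader>/<nvram>
     # paths shown above; libvirt validates and saves the change on exit.
     virsh edit MacinaboxCatalina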
    1 point
  6. Hi Nick, you need to set up the tdarr_aio container to use hardware transcoding. There's currently no CA template, so you need to enable searching Docker Hub in your Unraid settings. Please see this page for the container settings: https://github.com/HaveAGitGat/Tdarr/wiki/2---Installation The settings are very similar to those of the tdarr container you have installed at the moment. There will be a CA template for tdarr_aio later today.
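     As a sketch of the hardware-transcoding part specifically - the container name, port and volume paths below are illustrative rather than the wiki's exact template; the piece that matters is handing the host GPU device node to the container:

     # The important line is --device /dev/dri, which passes the host's render
     # node (Intel/AMD VAAPI or QSV) into the container so FFmpeg inside it can
     # use the hardware encoders/decoders.
     docker run -d \
       --name=tdarr_aio \
       --device /dev/dri:/dev/dri \
       -p 8265:8265 \
       -v /mnt/user/appdata/tdarr_aio:/app/server \
       -v /mnt/user/media:/home/Tdarr/media \
       haveagitgat/tdarr_aio

     For NVIDIA cards the mechanism is different (the NVIDIA container runtime rather than /dev/dri).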
    1 point
  7. You only need it for private shares. If you set the Security option to Private, a Rule box appears into which you need to enter the rule. Public shares are easier; I'd experiment with those first.

     On the client you mount an NFS share using the mount command like this:

     mount -t nfs tower:/mnt/user/name-of-share /mnt/mount-point

     which is similar to how you would mount an SMB share. Note that you have to specify the full path to the share (i.e. tower:/name-of-share wouldn't work) and /mnt/mount-point must already exist on the client. To unmount the share you use either

     umount tower:/mnt/user/name-of-share

     or

     umount /mnt/mount-point

     This information is summarised here: https://linuxize.com/post/how-to-mount-an-nfs-share-in-linux/
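     If you want the mount to survive client reboots, the usual approach is an /etc/fstab entry on the client. This is generic Linux practice rather than anything Unraid-specific; the host name, share and mount point below are the same placeholders as above.

     # /etc/fstab line on the client - mounts the share automatically at boot
     tower:/mnt/user/name-of-share  /mnt/mount-point  nfs  defaults  0  0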
    1 point
  8. Probably. I saw some older read errors on parity2 in the first diags, but since they were old (a year or so ago) I didn't mention them. Since there are more now, and they are a disk problem, you should replace it. It's even possible the disk was causing the sync errors - disks shouldn't return wrong data, but it's been known to happen.
    1 point
  9. Unfortunately, fluency in both languages is required. It would be very bad if a Google translation caused a major misunderstanding of a basic function. Proper technical translation is hard, and the consequence of getting it wrong could be that someone loses data by following approved instructions.
    1 point
  10. OTOH, I would guess that what Krusader is doing is moving the file to its recycle bin (which is probably stored within the docker.img file). I believe you can disable the recycle bin option in Krusader.
    1 point
  11. No errors so far, which is good. I'd let it run for 24 hours or more.
    1 point
  12. The corruption occurred as a result of failing a read-ahead I/O operation with "BLK_STS_IOERR" status. In the Linux block layer each READ or WRITE can have various modifier bits set. In the case of a read-ahead you get READ|REQ_RAHEAD, which tells the I/O driver this is a read-ahead. In this case, if there are insufficient resources at the time the request is received, the driver is permitted to terminate the operation with BLK_STS_IOERR status. There is an example of this in the Linux md/raid5 driver. In the case of Unraid it can definitely happen under heavy load that a read-ahead comes along and there are no 'stripe buffers' immediately available; in that case, instead of making the calling process wait, it terminated the I/O. It has worked this way for years.

      When this problem first happened there were conflicting reports of the config in which it happened. My first thought was an issue in the user share file system. Eventually I ruled that out, and my next thought was cache vs. array. Some reports seemed to indicate it happened with all databases on cache, but I think those reports were mistaken for various reasons. Ultimately I decided the issue had to be with the md/unraid driver. Our big problem was that we could not reproduce the issue, but others seemed to be able to reproduce it with ease.

      Honestly, thinking failing read-aheads could be the issue was a "hunch" - it was either that or some logic in the scheduler that merged I/Os incorrectly (there were kernel bugs related to this, with some pretty extensive patches, and I thought maybe the developer missed a corner case - this is why I added a config setting for which scheduler to use). This resulted in a release with those 'md_restrict' flags to determine whether one of those was the culprit, and what do you know, not failing read-aheads makes the issue go away.

      What I suspect is that this is a bug in SQLite - I think SQLite is using direct I/O (bypassing the page cache) and issuing its own read-aheads, and its logic for handling a failing read-ahead is broken. But I did not follow that rabbit hole - too many other problems to work on.
    1 point
  13. But when will it support this?
    1 point
  14. I suspect it would take major re-engineering of the GUI to make it capable of any sort of multi-language support. Be nice if it ever happened, though.
    1 point
  15. The only way that will happen is if someone fluent in both English and Chinese volunteers their time and coordinates with Limetech.
    1 point
  16. 1 point
  17. Nah you can only run Mac vms using Q35. All of us with AMD cards deal with pci error 127. Just Google "AMD reset bug" and you will see it effects all newer AMD cards across a bunch of Linux distros. As of now there is no fix for the RX series of cards.
    1 point
  18. For someone who is (based upon the number of posts) a new user, and presumably only familiar with a windows system where drive letters are static its not an unreasonable question.
    1 point