BBenja

Members · 30 posts

Posts posted by BBenja

  1. Hi all,

     

just updated to 6.12.2. After the reboot, my cache drives (formatted btrfs) show 'Unmountable: Unsupported or no file system'. (There is also a warning '... references non existent pool cache'.)

     

    Please find my Diagnostics file attached. What can I do to fix the issue and prevent any data loss?

     

Thanks in advance for all the help and great work!

    mule-diagnostics-20230705-1125.zip

  2. Edit: I think audiobookshelf is close enough to what I am looking for.

     

     

    Hi all,

currently I archive podcasts from multiple sources on my server and use Plex to listen to them. While I love Plexamp for music, it isn't designed to be a podcast app.

Therefore, I would like to create a local RSS feed of all my podcasts in a given directory (so basically, one RSS feed per folder) and update it whenever a new episode is added (or once a day, if that is more feasible). Then I could use a podcast app on iOS to load these RSS feeds and manage my podcasts better.

     

    1) Is this procedure possible? Or am I missing something?

    2) Do you have any advice for a good docker/app/script to create such feeds?

(I had a look at FreshRSS and RSSHub, but I couldn't find a way to create an RSS feed from local files.)

     

Of course, I am also open to an alternative approach to solving this problem!

     

    Thanks for any advice.

  3. 1 hour ago, jsavargas said:

     

    Has it stopped working after upgrading to 6.11.2?

     

     

    I can confirm that something changed after the update.

I am not using borgbackup but rely on other scripts that use libffi.so.7. Since I updated two hours ago, I get the following error:

    ImportError: libffi.so.7: cannot open shared object file: No such file or directory
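In case it helps others debug, a quick Python sketch I used to check which libffi sonames the dynamic linker can actually load (the soname list is just taken from my error message):

```python
import ctypes
import ctypes.util

def probe(names):
    """Report which shared-library sonames the dynamic linker can load."""
    status = {}
    for name in names:
        try:
            ctypes.CDLL(name)
            status[name] = "loadable"
        except OSError:
            status[name] = "missing"
    return status

# The sonames below are the ones relevant to my error message:
print(probe(["libffi.so.7", "libffi.so.8", "libffi.so"]))
print("default libffi:", ctypes.util.find_library("ffi"))
```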

     

  4. 19 hours ago, trurl said:

    I had to remove this plugin because syslog was getting spammed with these

    ...

     

Did you also remove the SSD TRIM plugin? I can't install it via link or file.

I always get the same error that some people reported above.

     

     

Edit: never mind, I just saw that the plugin is now built-in.

    thanks for all your work!

     

  5. 13 minutes ago, JorgeB said:

In the syslog it's logged as a disk problem, but since the SMART test passed you can rebuild on top. Make sure the emulated disk is still mounting before doing it; it's also a good idea to swap/replace cables first, to rule them out if it happens again on the same disk.

     

    Thank you so much for the fast help. I will do so.

The rebuild works the same way as for a replaced disk, doesn't it?

     

    Quote

    There are also what look like connection issues with both cache devices, though strange that it's with both, unless they share a power splitter or something.

     

Hm, I don't think so. I will double-check and replace any necessary cables. Hopefully that solves the issue.

Does it make sense to run and post diagnostics again after all the cable swapping and rebuilding?

  6. Dear all,

yesterday Unraid put Disk 2, one of my hard drives, into an error state. Now there is a red cross next to the drive, and the error count next to it shows 33 (it was 0 previously).

I ran an extended SMART self-test, which took almost a day and shows 'completed without error'. I can attach it if needed; for now I 'only' post the diagnostics, as suggested on the help page.

Also, since then Unraid has shown a yellow and even a green message about the drive (see attached screenshot).

     

I didn't move my server, didn't even touch it. There was a parity check running (scheduled monthly), but when the error message first occurred it was actually paused.

     

    Could you tell me what is going on? Thank you very much, any help is much appreciated!

    2022-09-01 09_26_02-Mule_Device - Chromium.png

    mule-diagnostics-20220901-0921.zip

  7. Dear all,

I am aware that this topic has already been discussed multiple times, but I am still unsure which approach is 'best'.

I would like to access Plex from outside of my network, mainly for listening to music.

     

After going through a lot of threads, I found mainly the following suggestions:

    - Don't do it

    - Just enable it in Plex itself

    - Use a reverse proxy

     

Could anyone help me pick the best approach?

    Again, I would like to access my Plex media. I have no access to anything else on my server from outside of my network (and I would like to keep it that way).

     

In addition, I read that it makes sense to put the Plex Docker container on a separate VLAN and give the container access only to the media shares. Is that still recommended, and if so, how can I do that? (Plus any additional tricks to harden security?)

     

    Thank you in advance.

    Best,

I use PhotoPrism with my own MariaDB and it seems to work fine.

However, now I have to import my pictures (about 60,000 files). (I want to restructure the data, therefore I use import rather than indexing.)

Two questions: first, is it normal for this to take very long? (It has been running for almost a day and has imported barely 10%.)

Secondly, it seems the import only runs while I have the GUI open. Whenever I close it, the import stops after a couple of minutes; at least that's what the logs indicate.

  9. Dear Unraid Community,

     

I bought a new 16 TB drive to replace my parity drive and used the old parity drive to replace a smaller data drive. To do so, I followed the parity swap procedure described in the Unraid wiki.

However, I made the mistake of not carefully reading the instructions before starting the procedure (my bad!). I used the unbalance plugin to empty the old data drive and move its data to the other drives.

Then I followed the instructions step by step. Currently, I am waiting for the parity to be copied. The next step would be the data drive rebuild. However, since the data drive is already empty, the rebuild would waste a lot of time/reads/resources rebuilding an empty drive.

    Do I still need to run this step now, or is there an alternative way?

Second question: after the whole procedure is completed, should I move the data again using unbalance so that no drives are completely full, or is that not necessary?

     

    Thank you for your help.

    Best,

Previously everything worked great. Then I changed the share (copied the Calibre library to a new share) and updated the Docker template accordingly. Unfortunately, now I always get an error message when I want to convert EPUBs to MOBI.

I read that it is a permission problem, but I don't know how to fix it. I already ran the Unraid New Permissions tool (in the WebUI), but that didn't help.

Would appreciate some tips. Thank you :)
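For anyone searching later: as I understand it, the New Permissions tool resets ownership to nobody:users and opens up the file modes. A manual shell sketch of that idea, demonstrated on a scratch directory so it can run unprivileged (on the real server I would point SHARE at the actual share path instead):

```shell
# Scratch directory for demonstration; on the server use e.g.
# SHARE=/mnt/user/calibre-library
SHARE=$(mktemp -d)
mkdir -p "$SHARE/Author/Book"
touch "$SHARE/Author/Book/book.epub"

# Ownership reset needs root, so it is only shown here:
# chown -R nobody:users "$SHARE"

# Files read/write for everyone, directories also traversable:
find "$SHARE" -type f -exec chmod 666 {} +
find "$SHARE" -type d -exec chmod 777 {} +

ls -lR "$SHARE"
```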

  11. 38 minutes ago, trurl said:

    like your cache also filled at some point but wasn't full when the diagnostics were taken.

     

    Looks like plex was OOMed at some point though it isn't clear it is the culprit. How much RAM are you giving to VMs?

     

    Why do you have your system share (and so docker.img and libvirt.img) on the array? These will keep array disks spunup since they are always open.

    Thanks for your answer.

Yes, the cache filled up because I transferred a lot of data from my computer to the Unraid server (and didn't disable the cache / didn't enable fast read).

About the VM: I created a VM for testing purposes, but it has been disabled ever since. So I thought it didn't affect anything.

  12. Hello,

over the last two days, I got the error 'Your server has run out of memory, and processes (potentially required) are being killed off. You should post your diagnostics and ask for assistance on the unRaid forums' twice.

Therefore, I thought it best to ask here.

     

Some background: my existing drives were very low on space, so I added a new one and used the unbalance plugin to move some data.

Other than that, I didn't make any changes.

     

    Thanks in advance!

    mule-diagnostics-20201221-0853.zip

  13. On 11/26/2019 at 10:13 PM, deusxanime said:

    Just having a VPN isn't quite enough. What you need is a proxy to direct it through. If you are using binhex's DelugeVPN container, I believe he usually includes the Privoxy proxy host as part of that. If so, you should be able to go into JDownloader's Settings and look under Connection Manager. You'll want to add the Privoxy proxy there using your unRAID server's host/IP address (or the DelugeVPN container's IP if you have it set to use a different one from the unRAID server, which is uncommon) and port 8118, with user and password left blank. Also make sure to uncheck the No Proxy line so it doesn't try to use your non-VPN'ed internet connection. Once you have that set you should be good to go. 

     

    Just to add on - to double-check you are using your proxy, look in the Connection column of your Downloads when something is running. I think it will say in there what IP address you are using to download (might have to hover over the icons to see in a tool tip popup). Verify that is different from your normal IP you are using for your home connection. If you don't know how to check your home connection's IP, just open a browser on your home system and go to Google and search "what is my ip" and it will tell you.

    Sorry to pick up on this old post.

So I route my traffic through the DelugeVPN container AND added the proxy in JDownloader 2 (HTTP, the server's IP, port 8118).

Unfortunately, it doesn't show any IP when I hover over the Connection column (it only shows "Connection (Resizing locked)"). Is there any other way to find out the IP?
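In case it helps: one way to check the proxy independently of JDownloader is to ask an IP-echo service through it. A small Python sketch (the proxy address is my Privoxy endpoint and just an example):

```python
import urllib.request

def proxied_opener(proxy_url):
    """Build an opener that sends HTTP and HTTPS requests through the
    given proxy instead of the direct connection."""
    handler = urllib.request.ProxyHandler({"http": proxy_url,
                                           "https": proxy_url})
    return urllib.request.build_opener(handler)

# On my LAN I would then compare the two IPs:
#   opener = proxied_opener("http://192.168.1.10:8118")
#   print(opener.open("https://ifconfig.me/ip", timeout=10).read().decode())
```

The curl equivalent from any shell would be `curl -x http://192.168.1.10:8118 https://ifconfig.me` versus a plain `curl https://ifconfig.me`; if the two IPs differ, the proxy is doing its job.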

  14. First of all thank you for all the effort!

I have been using DelugeVPN for quite some time now and it has worked well. But now I can't connect to the WebUI anymore.

I only added a new port to the Docker container (to route the traffic from a different container). Even though I reverted the change, I still can't connect to the WebUI. (I use AirVPN, by the way.)

The log file shows two things. One is just a DEPRECATED message, but the other is the following error:

     

    2020-11-29 16:49:03,596 DEBG 'start-script' stdout output:
    Options error: --explicit-exit-notify can only be used with --proto udp
    Use --help for more information.

     

     

    And full log, before it repeats:

    2020-11-29 16:49:03,632 DEBG 'start-script' stdout output:
    [info] Starting OpenVPN (non daemonised)...
    
    2020-11-29 16:49:03,637 DEBG 'start-script' stdout output:
    2020-11-29 16:49:03 DEPRECATED OPTION: --cipher set to 'AES-256-CBC' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM). Future OpenVPN version will ignore --cipher for cipher negotiations. Add 'AES-256-CBC' to --data-ciphers or change --cipher 'AES-256-CBC' to --data-ciphers-fallback 'AES-256-CBC' to silence this warning.
    
    2020-11-29 16:49:03,638 DEBG 'start-script' stdout output:
    Options error: --explicit-exit-notify can only be used with --proto udp
    Use --help for more information.
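From the OpenVPN manual, `--explicit-exit-notify` is only valid with UDP, so I assume the fix is one of these two variants in the .ovpn config (server name and port are placeholders, not my real values):

```
# Variant 1: stay on TCP and drop the incompatible option
proto tcp
remote <airvpn-server> 443
# explicit-exit-notify 1    <- remove or comment out

# Variant 2: switch back to UDP, where the option is allowed
proto udp
remote <airvpn-server> 443
explicit-exit-notify 1
```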

     

  15. Dear all,

     

I use gPodder to download podcasts and Airsonic to organize them.

In general, that works very well. Unfortunately, gPodder can only use existing metadata and renames downloaded audio files to: publishing date - name of the podcast.

Of course, Airsonic can add a track number, but that has to be done manually, and somehow you can't sort the list by filename first.

     

Is there an easy, automatic way to add a track number to the podcast files?

Manually, I would sort them by filename and then start enumerating from one.

     

    Thanks in advance!