Posts posted by murkus

  1. 8 hours ago, dlandon said:

    What I was looking for were the device designations.  I'm pretty sure there are invalid characters in the device designations.

    I don't know what you mean by device designators, but the export paths/filenames only contain letters, numbers, dashes and underscores, nothing else.

     

    I don't see why these should not be acceptable names, and renaming them would be quite an effort, as they are mounted by a lot of other machines, too.

  2. I had given up on it half a year ago, after upgrading to 6.11; it was broken, or so I thought. I gave it another shot with 6.12.2, and what I think made the substantial difference is that I now use a "disk share" instead of a regular "user share". Disk shares may not yet be enabled on your unraid installation; they can be enabled under "Global share settings". The disk share has a size limit for Time Machine. I don't see any substantial drawbacks of a disk share compared to a user share if you plan to keep all your macOS backups on one disk anyway. The SHFS user file system turns out to come with a lot of performance overhead, and I have stopped using it for any backup shares.

     

    I am using the Time Machine support that is included with the unraid OS, not the TimeMachine container. I had tried the container in the past, and that was also not successful for me. I removed all custom SMB tweaks; I am using the stock SMB config that 6.12 comes with.

     

    Time Machine is not perfect now, but it works: the initial backup stopped 4-5 times and I had to restart it, and the whole process took 2-3 days until the initial backup concluded successfully. Starting new differential backups works fine.

     

    I don't know yet whether the verification will be successful, as it is currently running. If it fails I will provide an update here.
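
    For reference, once the share is set up, pointing the Mac at it can also be done from the command line with tmutil. This is just a minimal sketch; the server name "tower", share name "backups-tm" and user "tmuser" are placeholders for whatever your setup uses:

    # hypothetical names - replace server, share and credentials with your own
    sudo tmutil setdestination -a "smb://tmuser:password@tower.local/backups-tm"
    tmutil destinationinfo     # verify the destination was added
    tmutil startbackup --auto  # optionally kick off a backup right away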

    • Thanks 1
  3. From the initial post it seems that unraid is the NFS server and some Debian 11 machine is the NFS client. However - and this is often overlooked - when stat and locks are used, the NFS server (here unraid) will want to send data to statd and lockd on the NFS client, here the Debian 11 machine. That means you need to fix the port numbers on the Debian 11 side and then allow them in its host firewall, i.e. allow unraid to send to rpcbind, statd and lockd. To be explicit: even if unraid is the NFS server, the NFS client also has server roles for stat and lock, and you need to consider this in your firewall design.

     

    I am not sure that this is the problem here, but as this stuff is often overlooked I mention it, just in case the cause is that unraid cannot reach statd and lockd on the Debian 11 client.
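
    A minimal sketch of what pinning those ports can look like on a Debian 11 client; the port numbers are arbitrary placeholders, 192.168.1.10 stands for the unraid server's address, and I am assuming ufw as the firewall:

    # /etc/default/nfs-common - pin the statd ports
    STATDOPTS="--port 32765 --outgoing-port 32766"

    # /etc/modprobe.d/lockd.conf - pin the lockd ports (reboot or reload lockd afterwards)
    options lockd nlm_tcpport=32768 nlm_udpport=32768

    # allow the unraid server to reach rpcbind, statd and lockd on the client
    ufw allow from 192.168.1.10 to any port 111,32765,32768 proto tcp
    ufw allow from 192.168.1.10 to any port 111,32765,32766,32768 proto udp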

  4. I have not tried to use NFS for VDIs, only as a "Remote", which is what XCP-ng calls a share used for backups.

    Writing the backups works, but the health checks fail in most cases with:

    Error: SR_BACKEND_FAILURE_47(, The SR is not available [opterr=[Errno 116] Stale file handle

     

    I have no idea why this happens. If I remove the Remote and re-create it, the backup works with its health check once. The next time it runs on schedule, the problem re-appears.

     

    If someone has a solution, please step forward.

  5. I am going through a very frustrating moment because of the way CA Backup orders the stopping and starting of containers. Normally, as I understand unraid, containers are started in the order of the UI container list. And of course this has a deeper meaning if you have container dependencies.

     

    Why the '*/%§§ is CA Backup using alphabetical order, which is of course not the same as the order in the UI container list?

     

    This caused Mariadb to be stopped before the application that depends on it, and so the database was corrupted.

     

    I would strongly urge that CA Backup use the same order as unraid does for starting containers, and the inverse order for stopping them (i.e. the same order that unraid uses to stop containers). Anything else will lead to disaster for some users!
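
    To illustrate the desired behavior (just a sketch, not CA Backup's actual code; "nextcloud" here is a placeholder for the application that depends on Mariadb):

    # start order as configured in the unraid UI: dependencies first
    ORDER="mariadb nextcloud"
    # stop in reverse order, so the application goes down before its database
    for c in $(echo "$ORDER" | tr ' ' '\n' | tac); do docker stop "$c"; done
    # ... back up appdata here ...
    # start again in the original order
    for c in $ORDER; do docker start "$c"; done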

     

     

     

  6. 16 minutes ago, daredoes said:

    Yeah this is being developed for me first and foremost. I love making it more accessible for others though. 

    If you need a 1:1 troubleshooting session just lmk. Probably won't take more than 15 minutes to solve. 

    I've been considering doing live coding on Twitch or something so others can learn and ask questions as I go. Let me know if that's of any interest as well (just for my own research).

     

    I see, thanks for the offer. I'll try to fix it on my own, as I don't use Twitch and I don't intend to join it. But if it helps, I would meet you on Matrix or Discord.

     

    The Library scan button seems to work. I found a corresponding log in the tmp folder where the fifos are.

  7. 6 minutes ago, daredoes said:

    @murkus 

    Give both a force update. I've made some good changes, but I think Airplay is still broken for snapcast at the moment. Were you even using it though? Doesn't affect you if not. 

     

    For the server.json you'll want to place it in `appdata/Mopidy3` alongside `mopidy.conf`

    There is a line in `mopidy.conf` which is VERY IMPORTANT, and it's:

    output = audioresample ! audioconvert ! audio/x-raw,rate=44100,channels=2,format=S16LE ! filesink location=/tmp/snapfifo

     

    The mopidy server when booted up creates a copy of this configuration and does some updates to it, including replacing `/tmp/snapfifo` with `/tmp/snapfifo{A_REALLY_LONG_UNIQUE_HASH}`.

    This works together with `servers.json` to allow you to have one docker image create multiple instances of Mopidy3 that can rely upon the same cache/data/etc

    In the attached code block which is my scrubbed `servers.json` I create two instances of Mopidy3, "Home" and "Ambience". I also provide information to reach my snapcast server (well not actually mine). This is used in boot up, and automatically put into the Iris configuration.

    This data is used to create a supervisord configuration for each desired instance of Mopidy3. So in this case it creates two programs, one for Home and one for Ambience.

    Those programs run a python program before starting up that reaches out to the snapcast server to clear any stream that has the same name, and then adds this stream in as a pipe pointing to `pipe:///data/snapfifo{hash}?name={name}&sampleformat={sample_format}&send_to_muted=false&controlscript=meta_mopidy.py`. Important note: you want to have Mopidy3 and Snapcast running on the same unraid instance with a shared tmp folder. For the extra instance, manually edit the docker config for Mopidy to expose the new MPD and HTTP ports over TCP.

    {
        "servers": {
            "Home": {
                "mpd": 6600,
                "http": 6680
            },
            "Ambience": {
                "mpd": 6601,
                "http": 6681
            }
        },
        "snapcast": {
            "host": "snapcast.example.com,
            "port": 443,
            "use_ssl": true,
            "enabled": true
        }
    }

     

    All of this code is public at github.com/daredoes/docker-snapcast or github.com/daredoes/docker-mopidy

     

    OK OK OK, this explains why shit doesn't work well...

     

    I did actually change the name of the fifo and created four of them in the mopidy config manually. Of course everything is jumbled up now.
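
    For anyone trying to reproduce the HTTP part of this: I assume the stream registration goes through snapserver's JSON-RPC control API. A minimal sketch of such a call (host, port, fifo path and stream name are placeholders, and I'm assuming the standard /jsonrpc endpoint on the HTTP port):

    curl -s http://snapcast.example.com:1780/jsonrpc \
      -d '{"id":1,"jsonrpc":"2.0","method":"Stream.AddStream","params":{"streamUri":"pipe:///tmp/snapfifo-home?name=Home&sampleformat=44100:16:2"}}'
    # removing a stream again works via Stream.RemoveStream with the stream's id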

     

     

     

  8. You wrote: "Did you know that the Mopidy app has the ability to dynamically create multiple instances AND automatically add and remove that instance as a stream source in snapcast over HTTP?"

     

    That sounds great! I would like to understand how I may use that.

     

  9. I did not have source = in the config file. Thanks for the example file; it is more self-explanatory. I will use that and edit it to include my old config settings.
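
    For context, a minimal sketch of what that section of snapserver.conf can look like; the fifo path, stream name and sample format are placeholders for whatever your Mopidy instance writes:

    [stream]
    source = pipe:///tmp/snapfifo?name=Home&sampleformat=44100:16:2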

     

    I also edited the server.json and put it in the config path in appdata. It seems to be read when restarting the server.

     

    I still get tons of these:

    [Notice] (handleAccept) ControlServer::NewConnection: <ip>

    [Error] (cleanup) Removing 1 inactive session(s), active sessions: 2

     

    I also clicked on Settings > Scan Library and it displays a pop-up saying "Scanning local library" with a spinner. I don't know whether it is effectively scanning the library, though.

     

  10. I installed today's update (of both your snapcast and mopidy3 containers). Now it no longer works fully. The SnapWeb UI says: The resource '/' was not found.

     

    What did you change? What do I need to change in my config?

     

    In the meantime I would love to go back to the previous version of both containers, but you seem to have removed them. There is only "latest" and no other tags... frustrating.

     

    I suggest you give every version of your containers a unique tag and let the "latest" tag point to the most recent version, so that people can go back to an older version if they have problems with the latest container.
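
    A sketch of what that could look like on the publishing side; the image name and version numbers are placeholders, not your actual tags:

    # build once, push an immutable version tag plus a moving "latest"
    docker build -t daredoes/docker-snapcast:0.2.1 .
    docker tag daredoes/docker-snapcast:0.2.1 daredoes/docker-snapcast:latest
    docker push daredoes/docker-snapcast:0.2.1
    docker push daredoes/docker-snapcast:latest
    # anyone hitting a regression can then pin e.g. :0.2.0 in their unraid template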

  11. 20 hours ago, dlandon said:

    So I am able to reproduce the issue.  UD is not making any changes,  but now the default seems to be to share, which it is not supposed to do.  The default happens when a device is set up and the 'Share' switch is never changed.  None of the recent changes should have altered this, but apparently something changed.  I'll issue an update sometime today or tomorrow since this is a security issue.  It's a security issue because remote shares will be shared without user intervention, as you found out.

    I installed your update and then it happened again. Not sure if your update really fixes this...

  12. 2 hours ago, dlandon said:

    So I am able to reproduce the issue.  UD is not making any changes,  but now the default seems to be to share, which it is not supposed to do.  The default happens when a device is set up and the 'Share' switch is never changed.  None of the recent changes should have altered this, but apparently something changed.  I'll issue an update sometime today or tomorrow since this is a security issue.  It's a security issue because remote shares will be shared without user intervention, as you found out.

    Correct. The share switches were "off", yet the sharing did happen. So this is consistent with what you found.

     

  13. Today the director process died, with no information about it in the bacula.log or the system log, so I restarted the container. Naturally, a lot of the processes that had been waiting were then displayed as being in error.

     

    So I clicked the restart button on them in the history list in relatively quick succession. The director died again. I can reproduce this: if I restart - say 5 or more - jobs from the history without waiting for each restart button's spinner to finish, the director will die and some bconsole processes will complain.

     

    If I wait for the spinner to finish before I restart the next job, the director will stay alive.

     

    I would be interested whether anybody else can reproduce this behavior.

     
