Joseph

Members • Posts: 444 • Days Won: 3

Posts posted by Joseph

  1. n00b to paperless-ngx here 🙋‍♂️ ... I'm hoping someone can point me in the right direction to determine why the docker instance is consuming 10TB of the unRaid docker image and maxing out the image file. I have all user-customizable fields pointed to locations outside the docker image, but I must be overlooking something. I don't have Tika or Gotenberg installed, just Redis. The GUI launches and everything seems to be working OK; it's just running out of docker image space. Thank you.

    Screenshot 2024-02-25 at 4.21.24 PM.png
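
    For anyone else chasing the same thing, two generic Docker commands that show where the space inside the docker image is going (a sketch; nothing here is specific to paperless-ngx):

    docker ps --size      # per-container writable-layer size (data written inside the image)
    docker system df -v   # usage broken down by images, containers, and volumes

    Anything large in the SIZE column is being written inside the container rather than to a mapped host path, which is the usual culprit when the docker image fills up.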

  2. Cool app, thanks! I might have overlooked something, but I didn't see a way to stop playback like you can in the Plex app dashboard. If the feature isn't there today, it would be great to see it in a future version. Just my $.02 :)

     

  3. 58 minutes ago, wgstarks said:

    I know I’m hearing a lot of complaints about System Prefs changing to Settings and it’s hard to find the right setting without having to run a search.

    I hate the new system prefs too, but I deal with it. Another petty annoyance: they changed "Enter Time Machine" to "Browse Time Machine Backups". I want to enter a time machine!

  4. 42 minutes ago, joshbgosh10592 said:

    Good luck. I've reinstalled the docker container and macOS, checked permissions, everything mentioned in these forums and I've never gotten more than the first backup to run successfully. I've given up.

     

    Thanks man... the TL;DR is that it's been working so far (🤞)  on a Mac that I recently upgraded to Ventura. So I might end up going that route.

  5. 2 hours ago, wgstarks said:

    All you should need to do is run the sudo command on that share.

    I removed the local TM and deleted the unRaid share, then had the docker re-create it. While trying to initiate the first backup, I stumbled onto an issue that makes me think the Mac might need a reboot after the initial backup. I'm waiting for it to complete and will try that too.

  6. 1 hour ago, wgstarks said:

    Those settings aren't affected. You shouldn't notice any change at all.

     

    Thanks guys for the clarification. I'm going to try a couple more things before going that route... meanwhile, I found this on the interwebs to produce a Time Machine log stream:

    log stream --predicate 'subsystem == "com.apple.TimeMachine"' --info --debug

     

    Starting manual backup
    Attempting to mount 'smb://TM_MacPro@timemachine._smb._tcp.local./TM_MacPro_SMB'
    Mounted 'smb://TM_MacPro@timemachine._smb._tcp.local./TM_MacPro_SMB' at '/Volumes/.timemachine/timemachine._smb._tcp.local./0F0C58C8-0505-4FDE-BB64-4FF393375FB7/TM_MacPro_SMB'
    Initial network volume parameters for 'TM_MacPro_SMB' {disablePrimaryReconnect: 0, disableSecondaryReconnect: 0, reconnectTimeOut: 60, QoS: 0x0, attributes: 0x1C}
    Configured network volume parameters for 'TM_MacPro_SMB' {disablePrimaryReconnect: 0, disableSecondaryReconnect: 0, reconnectTimeOut: 30, QoS: 0x20, attributes: 0x1C}
    Found matching sparsebundle 'MacPro.sparsebundle' with host UUID 'D2487E77-1697-5776-BE5A-28C60EEFC234' and MAC address '(null)'
    Not performing periodic backup verification: not needed for an APFS sparsebundle
    'MacPro.sparsebundle' does not need resizing - current logical size is 3.8 TB (3,798,891,797,504 bytes), size limit is 3.8 TB (3,798,891,797,913 bytes)
    Mountpoint '/Volumes/.timemachine/timemachine._smb._tcp.local./0F0C58C8-0505-4FDE-BB64-4FF393375FB7/TM_MacPro_SMB' is still valid
    Checking for runtime corruption on '/Volumes/.timemachine/timemachine._smb._tcp.local./0F0C58C8-0505-4FDE-BB64-4FF393375FB7/TM_MacPro_SMB/MacPro.sparsebundle'
    Mountpoint '/Volumes/.timemachine/timemachine._smb._tcp.local./0F0C58C8-0505-4FDE-BB64-4FF393375FB7/TM_MacPro_SMB' is still valid
    Runtime corruption check passed for '/Volumes/.timemachine/timemachine._smb._tcp.local./0F0C58C8-0505-4FDE-BB64-4FF393375FB7/TM_MacPro_SMB/MacPro.sparsebundle'
    Stopping backup because volume '/Volumes/.timemachine/timemachine._smb._tcp.local./0F0C58C8-0505-4FDE-BB64-4FF393375FB7/TM_MacPro_SMB' was unmounted.
    Backup cancel was requested.
    Failed to attach to '/Volumes/.timemachine/timemachine._smb._tcp.local./0F0C58C8-0505-4FDE-BB64-4FF393375FB7/TM_MacPro_SMB/MacPro.sparsebundle', error: 112 no mountable file systems
    Waiting 60 seconds and trying again.
    Backup reporting that it needs to be cancelled
    Backup canceled (22: BACKUP_CANCELED)

    Looks like the TM share mounts OK, then gets knocked offline for some unknown reason... thoughts?
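
    For what it's worth, the same Time Machine entries can also be pulled after the fact instead of streaming live (a sketch; same predicate as above, and the 2-hour window is arbitrary):

    log show --predicate 'subsystem == "com.apple.TimeMachine"' --info --last 2h
    log show --predicate 'subsystem == "com.apple.TimeMachine"' --info --last 2h | grep -iE 'error|fail'   # just the failures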

  7. 13 hours ago, wgstarks said:

    A re-install is implemented differently from an update but from a users perspective it's exactly the same except that it may take slightly longer. All files are being updated rather than just the changed files.

    I know, but how reliable is a local Time Machine backup... or any Time Machine backup, at getting my user data & system settings back? 😬

  8. 2 hours ago, wgstarks said:

    You might also try just using an unraid share for TM backups.

    That thought has crossed my mind, along with starting completely over and creating only the backup to unRAID first. After verifying that it's running reliable incremental backups, I'd create a new local backup and see what happens.

  9. On 11/21/2022 at 5:58 PM, wgstarks said:

    If so, you should probably setup separate shares for the two machines.

    https://forums.unraid.net/topic/123985-timemachine-application-support-thread/?do=findComment&comment=1134386

     

    UPDATE 1: So I changed the owner on both shares to nobody and the group to 1000, set the perms to 777 (roughly the commands sketched below), and tried the Big Sur TM again. The first backup completed; however, subsequent backups still fail... so SS/DD! :(

     

    UPDATE 2: Using the same perms & owner as above, TM on the Mac running Ventura seems to be just fine... so both are behaving just as before: Ventura works, Big Sur does not. Unless I borked the setup/install of the docker (which is within the realm of possibility), I'm leaning toward it being a limitation of Big Sur itself, or maybe it has something to do with also having a local backup on the Big Sur Mac.
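
    For reference, the "owner to nobody, group to 1000, perms to 777" step above boils down to something like this (a sketch; the share path is the TM_MP_SMB one from the other posts and may not match yours):

    sudo chown -R nobody:1000 /mnt/user/TM_MP_SMB/   # owner nobody, group GID 1000
    sudo chmod -R 777 /mnt/user/TM_MP_SMB/           # wide-open perms, for testing only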

  10. On 11/21/2022 at 5:58 PM, wgstarks said:

    If so, you should probably setup separate shares for the two machines.

    https://forums.unraid.net/topic/123985-timemachine-application-support-thread/?do=findComment&comment=1134386

     

    So I followed the instructions to create 2 different users (and different share names). I changed the UID and GID for each share name and started a backup, but it didn't complete and has since stopped working altogether. Both shares before the chown had UID=timemachine; GID=1000:

     

    I think I did this correctly?:

    * sudo chown -R TM_MP:1001 /mnt/user/TM_MP_SMB/

    * sudo chown -R TM_MBP:1002 /mnt/user/TM_MBP_SMB/

     

    When I check the results of each share I get:

    * TM_MP_SMB: UID is neither timemachine nor TM_MP; GID=1000

    * TM_MBP_SMB: UID is neither timemachine nor TM_MBP; GID=1000

     

    So there's something wrong with both share owners, and the group is set to 1000.

     

    Can you let me know what they should be for each share and how the permissions should be set? My guess is the owner should either be timemachine with group 1000 for both, or they should be set to their respective UIDs & GIDs; but that's not what I have (a quick verification sketch is below).

     

    Thanks in advance!
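
    For the verification sketch mentioned above, a generic way to see exactly which owner/group (by name and numeric ID) and mode each share root ended up with:

    ls -ldn /mnt/user/TM_MP_SMB/ /mnt/user/TM_MBP_SMB/                        # numeric UID/GID of each share root
    stat -c '%U:%G (%u:%g) %a %n' /mnt/user/TM_MP_SMB/ /mnt/user/TM_MBP_SMB/  # names, IDs, and mode in one line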

  11. 16 hours ago, wgstarks said:

    If so, you should probably setup separate shares for the two machines.

    ok, will try that, thank you!

     

    Quote

    FYI, if you click this (or any) docker run link it shows you how to get the data.

    Doh! Thanks... here it is:

     

    docker run
      -d
      --name='TimeMachine'
      --net='br0'
      -e TZ="Europe/Berlin"
      -e HOST_OS="Unraid"
      -e HOST_HOSTNAME="Backup"
      -e HOST_CONTAINERNAME="TimeMachine"
      -e 'VOLUME_SIZE_LIMIT'='6 T'
      -e 'TM_USERNAME'='timemachine'
      -e 'PASSWORD'='[REMOVED]'
      -e 'ADVERTISED_HOSTNAME'='timemachine'
      -e 'CUSTOM_SMB_CONF'='false'
      -e 'CUSTOM_USER'='false'
      -e 'DEBUG_LEVEL'='1'
      -e 'MIMIC_MODEL'='TimeCapsule8,119'
      -e 'EXTERNAL_CONF'=''
      -e 'HIDE_SHARES'='no'
      -e 'TM_GROUPNAME'='timemachine'
      -e 'TM_UID'='1000'
      -e 'SET_PERMISSIONS'='false'
      -e 'SMB_INHERIT_PERMISSIONS'='no'
      -e 'SMB_NFS_ACES'='yes'
      -e 'SMB_METADATA'='stream'
      -e 'SMB_PORT'='445'
      -e 'SMB_VFS_OBJECTS'='acl_xattr fruit streams_xattr'
      -e 'WORKGROUP'='WORKGROUP'
      -e 'TM_GID'='1000'
      -e 'SHARE_NAME'='TimeMachine'
      -l net.unraid.docker.managed=dockerman
      -l net.unraid.docker.icon='https://upload.wikimedia.org/wikipedia/de/f/f4/Time_Machine_%28Apple%29_Logo.png'
      -v '/mnt/user/timemachine/':'/opt/timemachine':'rw'
      --hostname timemachine 'mbentley/timemachine' 
    6ca23edde97fa1d05724abb35e0e3af9dc0a12a885b53d04c817598540f342bf

     

     

    UPDATE 1: OK, so I blew away the TM data/share and reinstalled the app to set up 2 shares per your detailed & amazing instructions. The first backup (Big Sur) just started... will keep you posted. Thanks again!

  12. Hoping someone might be able to help a n00b with just enough knowledge to hurt himself...

     

    I have 2 Macs using this plug in:

     

    Both share the same TM login/instance of the docker.

    Mac 1 (Ventura): TM completes the initial backup to unRAID just fine and performs hourly backups.

    Mac 2 (Big Sur): TM completes the initial backup to unRAID, but will not make subsequent backups. Note: it also successfully backs up to a local TM external drive.

     

    Not sure what the command is to list the docker run info (there's a generic way to pull it, noted after the log below), but here is the log file:

     

    
    Successfully dropped root privileges.
    avahi-daemon 0.8 starting up.
    WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
    dbus_bus_get_private(): Failed to connect to socket /var/run/dbus/system_bus_socket: Connection refused
    WARNING: Failed to contact D-Bus daemon.
    avahi-daemon 0.8 exiting.
    dbus-daemon[31]: [system] org.freedesktop.DBus.Error.AccessDenied: Failed to set fd limit to 65536: Operation not permitted
    Found user 'avahi' (UID 86) and group 'avahi' (GID 86).
    Successfully dropped root privileges.
    avahi-daemon 0.8 starting up.
    WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
    Loading service file /etc/avahi/services/smbd.service.
    Joining mDNS multicast group on interface eth0.IPv4 with address x.x.x.2.
    New relevant interface eth0.IPv4 for mDNS.
    Joining mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
    New relevant interface lo.IPv4 for mDNS.
    Network interface enumeration completed.
    Registering new address record for x.x.x.2 on eth0.IPv4.
    Registering new address record for 127.0.0.1 on lo.IPv4.
    Server startup complete. Host name is timemachine.local. Local service cookie is 251226102.
    Service "timemachine" (/etc/avahi/services/smbd.service) successfully established.
    Got SIGTERM, quitting.
    Leaving mDNS multicast group on interface eth0.IPv4 with address x.x.x.2.
    Leaving mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
    avahi-daemon 0.8 exiting.
    Found user 'avahi' (UID 86) and group 'avahi' (GID 86).
    Successfully dropped root privileges.
    avahi-daemon 0.8 starting up.
    WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
    dbus_bus_get_private(): Failed to connect to socket /var/run/dbus/system_bus_socket: Connection refused
    WARNING: Failed to contact D-Bus daemon.
    avahi-daemon 0.8 exiting.
    dbus-daemon[31]: [system] org.freedesktop.DBus.Error.AccessDenied: Failed to set fd limit to 65536: Operation not permitted
    Found user 'avahi' (UID 86) and group 'avahi' (GID 86).
    Successfully dropped root privileges.
    avahi-daemon 0.8 starting up.
    WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
    Loading service file /etc/avahi/services/smbd.service.
    Joining mDNS multicast group on interface eth0.IPv4 with address x.x.x.2.
    New relevant interface eth0.IPv4 for mDNS.
    Joining mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
    New relevant interface lo.IPv4 for mDNS.
    Network interface enumeration completed.
    Registering new address record for x.x.x.2 on eth0.IPv4.
    Registering new address record for 127.0.0.1 on lo.IPv4.
    Server startup complete. Host name is timemachine.local. Local service cookie is 2018396226.
    Service "timemachine" (/etc/avahi/services/smbd.service) successfully established.
    nmbd version 4.15.7 started.
    Copyright Andrew Tridgell and the Samba Team 1992-2021
    query_name_response: Multiple (2) responses received for a query on subnet x.x.x.2 for name WORKGROUP<1d>.
    This response was from IP x.x.x.100, reporting an IP address of x.x.x.100.
    smbd version 4.15.7 started.
    Copyright Andrew Tridgell and the Samba Team 1992-2021
    INFO: Profiling support unavailable in this build.
    Failed to fetch record!
    query_name_response: Multiple (2) responses received for a query on subnet x.x.x.2 for name WORKGROUP<1d>.
    This response was from IP x.x.x.100, reporting an IP address of x.x.x.100.
    query_name_response: Multiple (2) responses received for a query on subnet x.x.x.2 for name WORKGROUP<1d>.
    This response was from IP x.x.x.100, reporting an IP address of x.x.x.100.
    query_name_response: Multiple (2) responses received for a query on subnet x.x.x.2 for name WORKGROUP<1d>.
    This response was from IP x.x.x.100, reporting an IP address of x.x.x.100.
    Got SIGTERM: going down...
    Executing .s6-svscan/finish with arguments 
    INFO: CUSTOM_SMB_CONF=false; generating [global] section of /etc/samba/smb.conf...
    INFO: Avahi - generating base configuration in /etc/avahi/services/smbd.service...
    INFO: Avahi - using timemachine as hostname.
    INFO: Avahi - adding the 'dk0', 'TimeMachine' share txt-record to /etc/avahi/services/smbd.service...
    INFO: Group timemachine exists; skipping creation
    INFO: User timemachine exists; skipping creation
    INFO: CUSTOM_SMB_CONF=false; generating [TimeMachine] section of /etc/samba/smb.conf...
    INFO: Samba - Created User timemachine password set to none.
    INFO: Samba - Enabled user timemachine.
    INFO: Samba - setting password
    INFO: SET_PERMISSIONS=false; not setting ownership and permissions for /opt/timemachine
    INFO: Avahi - completing the configuration in /etc/avahi/services/smbd.service...
    INFO: samba-bgqd PID exists; removing...
    removed '/run/samba/samba-bgqd.pid'
    INFO: dbus PID exists; removing...
    removed '/run/dbus/dbus.pid'
    INFO: running test for xattr support on your time machine persistent storage location...
    INFO: xattr test successful - your persistent data store supports xattrs
    INFO: entrypoint complete; executing 's6-svscan /etc/s6'
    nmbd version 4.15.7 started.
    Copyright Andrew Tridgell and the Samba Team 1992-2021
    query_name_response: Multiple (2) responses received for a query on subnet x.x.x.2 for name WORKGROUP<1d>.
    This response was from IP x.x.x.100, reporting an IP address of x.x.x.100.
    smbd version 4.15.7 started.
    Copyright Andrew Tridgell and the Samba Team 1992-2021
    INFO: Profiling support unavailable in this build.
    Failed to fetch record!

     

    I'm out of ideas... thanks guys.
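
    Re: not knowing the command to list the docker run info, a generic way to dump a container's configuration straight from Docker (a sketch; the container name 'TimeMachine' matches the docker run output shown in the other post):

    docker inspect TimeMachine                                   # full container config as JSON
    docker inspect --format '{{json .Config.Env}}' TimeMachine   # just the environment variables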

     

     

     

  13. On 11/9/2022 at 4:51 AM, PhilipJFry said:

    Does this update address the SMB Shares issue(s) introduced with 6.11.2 that me and others users have come across @limetech?, or is that in a later version?

    Are you able to downgrade to 6.11.1? I've had terrible SMB issues since the 6.9.x days and regretted updating to 6.10.x, as things only got worse. I was quick to update to 6.11.1 after reading that SMB performance was better than it had been, and I can confirm that for my gear this is true to an extent. Regardless, after reading some of the comments here, I'm holding off on trying any of the newer 'improvements' at this time.

  14. On 10/26/2022 at 1:15 PM, johnwhicker said:

    After upgrading from 6.9.2 to 6.11.1 my entire Time Machine setup got majorly screwed up. I have 6 Macs backing up to this time machine share and outta of all 6 only the Sierra one continue to work under 6.11.1  All other 5 stopped working. 

    I have the same problem and just found out about this docker; haven't tried it yet... maybe you have?

     

    Update: so far so good on one Mac... backing up another now and will update.

    Update 2: on Mac #2, TM completes the initial backup, but will not make subsequent backups and I'm out of ideas. :(

     

    Mac 1 (working) - Ventura, one TM backup; on unRAID

    Mac 2 (not working) - Big Sur, two TM backups; unRAID and Local

    Both share the same TM login/instance of the docker.

     

    I'll post this info to the support thread and see how that goes.

  15. 6 hours ago, JorgeB said:

    I believe the main issue is if you want to use a Mellanox NIC as eth0 or if you add a Mellanox NIC when running v6.10.x, if it is working with v6.9.x and it's not set as eth0 it should remain working after updating to v6.10.2.

     

    Having said that I suspect v6.10.3 stable is going to be released very soon, so probably best just to wait, or if you want to update now update to v6.10.3-rc1 which should be basically the same as v6.10.3 final.

     

     

    Thanks for the clarification... think I'll wait until the rc becomes a stable release.

  16. On 6/10/2022 at 4:10 PM, limetech said:

     

    Version 6.10.3-rc1 2022-06-10

    Bug fixes

    Fix issue detecting Mellanox NIC.

    ...

    Hi unRAIDers, can anyone say for sure whether there is a problem with ALL Mellanox NICs on the 6.10.x update, or just certain models? I'm running dual-port 10GbE Mellanox ConnectX-3 Pro NICs and I want to upgrade to the latest stable version (6.10.2), but I'll wait for the next stable release if it's not working quite right with my NIC setup.

     

    Thanks guys!!

    Hi unRAIDers... so, this Docker Safe Perms check in Fix Common Problems is so damn confusing to me! Running the Extended test gives me:

     

    The following files / folders may not be accessible to the users allowed via each Share's SMB settings. This is often caused by wrong permissions being used on new downloads / copies by CouchPotato, Sonarr, and the like:
    
    /mnt/user/Data/Resources/time95645775.jpg     ME/users (1000/100) 0670
    /mnt/user/Data/Users/RED/Voice Mail.amr       ME/users (1000/100) 0770
    
    Plus ~1300 files with 'incorrect' permissions

     

    These files (and others) were copied from my Mac to the unRAID share. I don't understand why it's best to run a reset to docker safe permissions (which will also reset the user to "nobody") when I can access everything on unRaid without issue. Interestingly, there are many other files added to the array from my Mac that aren't listed in the report... so my guess is it could have something to do with the owner/group settings or, as I suspect, the level of access granted (i.e. 670 & 770)? Either way, what should the proper docker 'safe' level of access be? If it's doable, I'll chmod it from the command line with ME/users as the owner & group (see the sketch below). If not, I'll hold my nose and run the reset to docker safe perms, if nobody/users turns out to be part of the problem. If I'm completely off base, perhaps someone could help me understand why it's throwing up on the files listed above?
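
    Something like this is what I have in mind for the chmod route, keeping ME/users as owner & group and only changing the mode (a sketch; the 775/664 values are just an example target, not a statement of what 'docker safe' actually requires):

    find /mnt/user/Data -type d -exec chmod 775 {} +   # example directory mode; adjust as needed
    find /mnt/user/Data -type f -exec chmod 664 {} +   # example file mode; adjust as needed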

     

    Thanks in advance, guys, to anyone who can ELI5! 🤣

  18. 3 hours ago, Jurak said:

    Does anyone have a nice Heimdall from the marvel universe banner? Not finding any thing that would look good and I'm not the greatest at editing photos.

    Not my best work, but I thought I'd upload them for you. Send me a link to the banner so I know what you're looking for.

     

    Heimdall (quick) 1.png

     

    Heimdall (quick) 2.png

     

    Heimdall (quick) 3.png
