
Posts posted by schreibman

  1. Any ideas on what causes this error?

    "[radarr] tar verification failed!"

     

    [21.01.2024 09:48:56][][radarr] tar verification failed! Tar said: tar: Removing leading `/' from member names; /*stdin*\ : Read error (39) : premature end; tar: Unexpected EOF in archive; tar: Child returned status 1; tar: Error is not recoverable: exiting now

     

     

    	Line 1114: [21.01.2024 09:45:35][debug][radarr] Container got excludes! 
    	Line 1115: /mnt/user/appdata/radarr/MediaCovers/
    	Line 1116: /mnt/user/appdata/radarr/Backups/
    	Line 1117: /mnt/user/appdata/radarr/logs/
    	Line 1118: [21.01.2024 09:45:35][ℹ️][radarr] Calculated volumes to back up: /mnt/user/appdata/radarr
    	Line 1119: [21.01.2024 09:45:35][debug][radarr] Target archive: /mnt/remotes/w/backup/ab_20240121_094432/radarr.tar.zst
    	Line 1120: [21.01.2024 09:45:35][debug][radarr] Generated tar command: --exclude '/mnt/user/appdata/radarr/MediaCovers' --exclude '/mnt/user/appdata/radarr/Backups' --exclude '/mnt/user/appdata/radarr/logs' -c -P -I zstdmt -f '/mnt/remotes/w/backup/ab_20240121_094432/radarr.tar.zst' '/mnt/user/appdata/radarr'
    	Line 1121: [21.01.2024 09:45:35][ℹ️][radarr] Backing up radarr...
    	Line 1122: [21.01.2024 09:48:27][debug][radarr] Tar out: 
    	Line 1123: [21.01.2024 09:48:27][ℹ️][radarr] Backup created without issues
    	Line 1124: [21.01.2024 09:48:27][ℹ️][radarr] Verifying backup...
    	Line 1125: [21.01.2024 09:48:27][debug][radarr] Final verify command: --exclude '/mnt/user/appdata/radarr/MediaCovers' --exclude '/mnt/user/appdata/radarr/Backups' --exclude '/mnt/user/appdata/radarr/logs' --diff -f '/mnt/remotes/w/backup/ab_20240121_094432/radarr.tar.zst' '/mnt/user/appdata/radarr'
    	Line 1126: [21.01.2024 09:48:56][debug][radarr] Tar out: tar: Removing leading `/' from member names; /*stdin*\ : Read error (39) : premature end; tar: Unexpected EOF in archive; tar: Child returned status 1; tar: Error is not recoverable: exiting now
    	Line 1127: [21.01.2024 09:48:56][][radarr] tar verification failed! Tar said: tar: Removing leading `/' from member names; /*stdin*\ : Read error (39) : premature end; tar: Unexpected EOF in archive; tar: Child returned status 1; tar: Error is not recoverable: exiting now
    	Line 1128: [21.01.2024 09:49:04][debug][radarr] lsof(/mnt/user/appdata/radarr)
    	Line 1133: [21.01.2024 09:49:04][debug][radarr] AFTER verify: Array
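    In case it helps, this is how I was planning to poke at the archive by hand (just a sketch; the archive path is copied from the log above, and I'm assuming zstd and tar are run from the Unraid console):

    # test the zstd stream itself; a truncated / premature-end archive should fail here
    zstd -t '/mnt/remotes/w/backup/ab_20240121_094432/radarr.tar.zst'

    # list the tar contents through zstdmt, the same decompressor the plugin's verify step uses
    tar -I zstdmt -tvf '/mnt/remotes/w/backup/ab_20240121_094432/radarr.tar.zst' > /dev/null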

     

    debug.7z

  2. As indicated, after a reboot no pool devices are recognized. This causes Docker to fail, since the docker img/apps are on a pool device.

    Reboot / poweroff tried; no luck.

     

    The devices show as unmounted in Unassigned Devices, but fail if I try to mount them through the GUI:
     

    Jan 14 12:36:12 Tower unassigned.devices: Error: Device '/dev/sdw1' mount point 'hd36pool' - name is reserved, used in the array or a pool, or by an unassigned device.
    Jan 14 12:36:12 Tower unassigned.devices: Disk with serial 'ST18000NE000-3G6101_ZVT0JX4D', mountpoint 'hd36pool' cannot be mounted.
     
    Jan 14 12:36:18 Tower unassigned.devices: Error: Device '/dev/sdac1' mount point 'hd36pool' - name is reserved, used in the array or a pool, or by an unassigned device.
    Jan 14 12:36:18 Tower unassigned.devices: Disk with serial 'ST18000NM000J-2TV103_ZR59ZJJV', mountpoint 'hd36pool' cannot be mounted.
    Jan 14 12:36:21 Tower unassigned.devices: Error: Device '/dev/nvme1n1p1' mount point 'nvme' - name is reserved, used in the array or a pool, or by an unassigned device.
    Jan 14 12:36:21 Tower unassigned.devices: Disk with serial 'Sabrent_SB-RKT4P-2TB_48790469800627', mountpoint 'nvme' cannot be mounted.
    Jan 14 12:36:26 Tower unassigned.devices: Error: Device '/dev/nvme0n1p1' mount point 'nvme' - name is reserved, used in the array or a pool, or by an unassigned device.
    Jan 14 12:36:26 Tower unassigned.devices: Disk with serial 'Sabrent_SB-R

    Also, I can only access the GUI terminal; SSH fails with 'target machine refuses.'

    Any help appreciated / diagnostics of course attached 🙂

    Screenshot2024-01-14135057.png

    tower-diagnostics-20240114-1349.zip

  3. I'm sure this is user error, but here is what I select:

    From: Disk 1, a folder of 14 TB

    To: Disks 1-4, all with 18 TB free

    When I run it, the scatter moves everything to the first disk selected, even though multiple disks are selected.

     

     

  4. 11 minutes ago, schreibman said:

    Agreed. Basically what I did is more of a kludge, and I'm sure it's a bad way to address this, but here's what I did:

    created a shell script to

    cp "/var/log/file.activity.log" to my log destination "/mnt/user/logs/fileactivity$(date +%Y%m%d%H%M%S).log", then rm "/var/log/file.activity.log"

    then created a job in User Scripts and scheduled it

     

    It's not going into the syslog itself, just using the same location.

    [Yeah, it's a mess, lots of issues with my approach, not recommending it; it was a stopgap for me.]
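    In case it's useful, roughly what that scheduled User Scripts job looks like (just a sketch; the destination share/path is my own choice, adjust to taste):

    #!/bin/bash
    # copy the file activity log to a dated file on the array, then clear the original
    DEST="/mnt/user/logs/fileactivity$(date +%Y%m%d%H%M%S).log"
    if [ -f /var/log/file.activity.log ]; then
        cp /var/log/file.activity.log "$DEST" && rm /var/log/file.activity.log
    fi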

    There is a (worse?) way to do this and have it land in the syslog, but I can only imagine this is an even more horrible solution, as your Unraid syslog will then contain all file activity:

      logger -n ${SYSLOG_SERVER} -P 514 -t unraid-file-activity < "/var/log/file.activity.log"

  5. On 10/31/2023 at 3:40 PM, dlandon said:

    The syslog server works with the syslog.  I don't know any way to add the file activity log to that.

    Agreed. Basically what I did is more of a kludge, and I'm sure it's a bad way to address this, but here's what I did:

    created a shell script to

    cp "/var/log/file.activity.log" to my log destination "/mnt/user/logs/fileactivity$(date +%Y%m%d%H%M%S).log", then rm "/var/log/file.activity.log"

    then created a job in User Scripts and scheduled it

     

    It's not going into the syslog itself, just using the same location.

    [Yeah, it's a mess, lots of issues with my approach, not recommending it; it was a stopgap for me.]

  6. Hello, 

    I've had my Unraid server power off unexpectedly a couple of times. Can you point me to the correct logs to inspect to help identify the root cause?

    For example, is there a log I can check to see if the UPS sent a poweroff signal?

    Are there other logs I can look at to help resolve this issue?

    I've had to manually power on the server to resolve.
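    For context, this is the sort of thing I was planning to check myself (assuming the stock apcupsd UPS support; the exact grep terms are just my guess):

    # current UPS status, including the reason for the last transfer to battery
    apcaccess status

    # look for UPS / power / shutdown related entries in the live syslog
    grep -iE 'apcupsd|ups|power|shut' /var/log/syslog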

    tower-diagnostics-20231017-1406.zip

  7. Is there a way to log WireGuard stats, specifically peer info?

    This info, on a given frequency:

    peer: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      preshared key: (hidden)
      endpoint: 172.58.111.98:47136
      allowed ips: 10.253.0.2/32
      latest handshake: 1 minute, 45 seconds ago
      transfer: 851.32 KiB received, 8.84 MiB sent
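    In case it helps frame the ask, this is roughly what I'd schedule if I scripted it myself (just a sketch; the interface name wg0 and the log destination are my assumptions):

    #!/bin/bash
    # append a timestamped snapshot of WireGuard peer stats to a daily log
    OUT="/mnt/user/logs/wg-stats-$(date +%Y%m%d).log"
    {
        echo "===== $(date '+%F %T') ====="
        wg show wg0
    } >> "$OUT"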

     

    And by chance, is there a way to expose the wg interface (e.g. wg0) in the GUI dashboard's network INTERFACE widget?

    edit: whoops, it's already there! Scroll down: VPN.

  8. Maybe a bug? Maybe user stupidity? The backup was created with no issues, then hung and then failed during verification when backing up the Plex-Media-Server docker:
     

    [26.04.2023 21:47:31][Plex-Media-Server][info] Backup created without issues
    [26.04.2023 21:47:31][Plex-Media-Server][info] Verifying backup...
    [26.04.2023 21:47:31][Plex-Media-Server][debug] Final verify command: --diff -f '/mnt/user/Backup/CommunityApp/ab_20230426_214539/Plex-Media-Server.tar.zst' '/mnt/user/appdata/Plex-Media-Server' '/mnt/user/appdata/Plex-Media-Server'
    [26.04.2023 21:50:57][Plex-Media-Server][debug] Tar out: tar: Removing leading `/' from member names; tar: Removing leading `/' from hard link targets; tar: /mnt/user/appdata/Plex-Media-Server: Not found in archive; tar: Exiting with failure status due to previous errors
    [26.04.2023 21:50:57][Plex-Media-Server][error] tar verification failed! More output available inside debuglog, maybe.
    [26.04.2023 21:52:34][Plex-Media-Server][debug] lsof(/mnt/user/appdata/Plex-Media-Server)

     

    I noticed this info message; it looks like it was trying to back up the same path twice:

    [26.04.2023 21:45:53][Plex-Media-Server][info] Calculated volumes to back up: /mnt/user/appdata/Plex-Media-Server, /mnt/user/appdata/Plex-Media-Server

     

    And I saw the Docker had both AppData Config Path ([2]) and Host Path 2 ([3]) set to "/mnt/user/appdata/Plex-Media-Server":

     

    
    [26.04.2023 21:45:53][Plex-Media-Server][debug] usorted volumes: Array
    (
        [0] => /mnt/user/data
        [1] => /mnt/remotes/10.1.1.55_O
        [2] => /mnt/user/appdata/Plex-Media-Server
        [3] => /mnt/user/appdata/Plex-Media-Server
    )

     

    So I changed the docker paths to be unique:

     

        [2] => /mnt/user/appdata/Plex-Media-Server/Config
        [3] => /mnt/user/appdata/Plex-Media-Server/Transcode

     

    And all was perfect!

    Again, maybe it's user stupidity? But in case it's a bug, logs attached:

     

    ab.debug (1).log - Debug when not working

    ab.debug (2).log - Debug now when working

     


  9. On 8/27/2022 at 3:39 AM, bonienl said:

    Names highlighted in red mean they exist more than once.

    The location column shows on which disks the same name exists.

    Would love to see a similar color added to directories that exist more than once.

  10. Love this plugin and appreciate your work. Would love the ability to manually add a disk. As silly as this may seem, my use case is that Unraid contains 90% of my 'infrastructure', and it would be useful to manually add disks and then assign them to an 'offline' disk tray layout. Just a simple way of keeping 100% of my disk inventory visible. I'd also use it for any 'cold-swap' disks I may have.

  11. So, bumping my topic with a bit of new info.

    Basically, pool 'hdd' is mounted as read-only; I'm sure I filled it and caused btrfs to puke.

    I want to 'move' the data into the array, but obviously the mover fails since the pool is read-only:

    Quote

    Apr  3 17:36:35 Tower root: file: /mnt/hdd/data/[path]/[filename]
    Apr  3 17:36:35 Tower root: create_parent: /mnt/hdd/data/[path]/ error: Read-only file system

     

    I'm thinking this should do the trick, using cp or cpg (I think cpg won't show progress if I tee the output), also creating an output file to check:


     

    screen
    
    cp -vR /mnt/hdd/data/. /mnt/user0/data/ 2>&1 | tee hddfilelist

     

    Is there a better method to move a large (60 TB) number of files off a failing read-only pool?
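    The other option I've been weighing is rsync, mostly because it can simply be re-run and will pick up where it left off if the pool drops out mid-copy (again just a sketch, same paths as above; the log file name is arbitrary):

    screen

    rsync -avh /mnt/hdd/data/ /mnt/user0/data/ 2>&1 | tee hddfilelist.log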

     


     

  12. I read through this thread and wanted to see if there was a solution for:

    Enabling a plugged-in, hot-spare USB flash drive on board the server.

     

    From a business continuity perspective, I'm responsible for minimizing my recovery time objective [RTO] and for defense in layers (hot/offsite/cloud), so my solution needs are:

     

    Layers:

    1. offsite and cloud [yes]
    2. hot [no]

    For "Hot":

    "Hot" Thumb-drive.

    Selectable in EFI like 'USB2" whatever

    Selectable from gui reboot screen

    Restarts / and can* start array as from a "clean shutdown" (*if configured):

    On 4/3/2014 at 1:22 PM, trurl said:

    super.dat has the current state of the array so if you copy that while the array is started and then boot using that copy unRAID will think it had an unclean shutdown and will start a parity check.

     

     

     

     

     

    On 4/4/2014 at 6:46 AM, SSD said:

    I did want to say, that loosing your flashdrive is not catastrophic. If you had no backup, and one day snapped the stick in half as you walked by, you could reconstruct it and have unRaid running (assuming you had another USB with valid key) in about 30 minutes. (Addons might be more complex, but hey it's your data you are most worried about).

     

    I agree, if "opps I dwerped off my USB" - about 30 minutes recovery time.  My concern is my drive fails when I am not at the server.  I need to facter in the average time it will take for me to get there, or to talk someont through it.  Again, all cool, just factoring in the time.

     

    A hot spare would keep recovery time down to roughly the time of a regular restart.

     

    On 4/4/2014 at 2:37 AM, garycase said:

    Now set the 2nd drive aside in a safe location ... and configure your UnRAID server with your primary key.

     

    Agreed, as part of the depth of the solution. I'd state it as "2nd [3rd, ...] drive aside in a safe location".

     

    On 4/4/2014 at 4:45 AM, knalbone said:

    two keys and establish a backup routine

     

    tl;dr, looking for a HOT, PLUGGED-IN USB, aka: 2 keys, 1 cron
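    To spell out the "1 cron" part, this is roughly what I have in mind (only a sketch; the spare's mount point is hypothetical, and per trurl's note above, booting from a copy of super.dat taken while the array was started would trigger a parity check):

    #!/bin/bash
    # nightly job: mirror the live flash (/boot) onto the hot-spare stick,
    # assumed mounted via Unassigned Devices at /mnt/disks/usb2 (name made up)
    rsync -a --delete /boot/ /mnt/disks/usb2/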

     

     

  13. On 6/13/2018 at 5:26 PM, SME Storage said:

    Thank you for your explaination about Unraid and how it works. I could not have put that in better writing.

    I am a little worried that I have not been able to find the right words to get the message over. 

    I agree on both points; @pwm's explanation of Unraid and how it works was very nice.

     

    However, my tl;dr on this is: if I'm accountable for managing [1+ TB of data] OR [0+ bytes of critical data], I would deploy a hot spare as a routine part of my business continuity plan.

     

    Do any dockers/apps/user scripts provide hot-spare functionality at the array or cache-pool level?

     

  14. Fix Common Problems identifies this issue:

     

    Share log references non existent pool cache. If you have renamed a pool this will have to be adjusted in the share's settings

    Clicking Share Settings results in:  ! Share 'log' has been deleted.

     

    I can see the directories /mnt/user/log and /mnt/user0/log both exist.

    SyslogSettings is set to the share named "logs", working as expected.

     

    Attempts to resolve:

    1. Setting SyslogSettings to the share "log": it does not appear as an option
    2. Adding a share named "log": results (as expected) in "Invalid share name. Do not use reserved names"

     

    An idea I thought about trying, though it seems like an awful idea:

    rm -f /mnt/user?/log

     

    Thinking "maybe" caused somewhere around here:

     

    1. SyslogSettings referenced "log"
    2. "log" pointed to the cache pool named "cache"
    3. deleted pool "cache"
    4. created new pool "nvme"
    5. created new share "logs"
    6. repointed SyslogSettings to share:"logs" pool:"nvme"

    And if I had to guess, it was at step #2.

    I recall a page listing some reserved words, like UNRAID, but I can't locate the URL.

     

    Resolution

    Currently, I ignore the finding.

     

    Any other way to resolve?
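    One more idea I haven't tried yet: checking whether a stale share config survived the pool rename (assuming share configs live under /boot/config/shares/, which is my guess):

    # see if a leftover log.cfg still points at the old "cache" pool
    ls -l /boot/config/shares/
    cat /boot/config/shares/log.cfg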

     

     

  15. Hi, I have a btrfs pool device (hdd) that is failing: it shows as rw when the array starts, then as completely full and read-only within a couple of minutes.

     

    To manually move the files to the array, would this be the correct process, or would there be a better method? It has about 60 TB of data that needs to be copied, if that makes a difference.

    • Understanding that /mnt/user0 is the user-share view without items on the cache/pool drives
    • pool drive = 'hdd'
    • folder to copy = 'data'

    I'm thinking of opening a terminal and:

    cp -R /mnt/hdd/data/. /mnt/user0/data/

     

    Thanks in advance


     

    tower-diagnostics-20230331-2253.zip

  16. 18 hours ago, JorgeB said:

    You must set the minimum free space for the share, it should be set to twice the largest file you expect to transfer, or when cache is full (and later arrays disk) it will fail.

    THANKS!  (... aaand it was right there on the screen as well: "Choose a value which is equal or greater than the biggest single file size you intend to copy to the share. Include units KB, MB, GB and TB as appropriate, e.g. 10MB.")
