
Posts posted by Hawkins12

  1. So every Tuesday morning, I have "Appdata Backup/Restore v2" run a backup of my appdata (among other) folders.  The initial backup is saved on my NVMe/cache drive until it gets moved during the day after successful completion.  Then, when I go to run Mover, it successfully creates the shell folder (i.e., the appdata folder titled with the date of the backup) on an available array disk with sufficient space; however, for some reason, the .tar file doesn't actually get moved.  The last few weeks, I've had to go in and move it manually via MC, which I'd rather not have to do.

     

    My question is: why would this be occurring?  The .tar file is about 465 GB, and I have 8 drives on the array with enough free space to hold it (anywhere from 627 GB to 2.42 TB available on each of the 8 drives).

     

    I was able to copy the log from when Mover ran:

     

    mvlogger: Log Level: 1
    mvlogger: *********************************MOVER -SHARE- START*******************************
    mvlogger: Wed Feb 23 01:41:27 EST 2022
    mvlogger: Share supplied backups
    Sharecfg: /boot/config/shares/backups.cfg
    mvlogger: Cache Pool Name: cache_nvme
    mvlogger: Share Path: /mnt/cache_nvme/backups
    mvlogger: Complete Mover Command: find "/mnt/cache_nvme/backups" -depth | /usr/local/sbin/move -d 1
    Feb 23 01:41:27 HawkinsUnraid move: move: skip /mnt/cache_nvme/backups/appdata backup/2022-02-22@05:00/CA_backup.tar
    mvlogger: Wed Feb 23 01:41:27 EST 2022
    mvlogger: ********************************Mover Finished*****************************

     

    What would be causing the skip?  I assume it has something to do with the Mover command directly above the skip line; however, I cannot discern why it wouldn't move.  In this particular case, Mover created the shell folder "2022-02-22@05:00" on my disk 27, which has 2.42 TB of space remaining; however, it doesn't actually move the .tar file associated with that folder.
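
    One common reason mover skips a file is that something still has it open, since mover is known to skip in-use files.  A quick way to check from the console (a sketch, assuming lsof is available on your install; substitute your actual path):

    # List any process still holding the tar open (path is illustrative)
    lsof "/mnt/cache_nvme/backups/appdata backup/2022-02-22@05:00/CA_backup.tar"
    # No output suggests nothing has it open; otherwise the listed PID is the holder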

     

    Thanks!

  2. On 8/13/2021 at 2:11 PM, CS01-HS said:

    If Windows backs up on the same day every week, you could add a user script to unRAID that runs later or the next day.

    I just wrote a similar one for another device.

     

    You may have to tweak it slightly for unRAID/your use case.  Test it carefully, because it calls rm and you don't want to accidentally wipe your array (or at least comment out the rm line until you're sure it works).

     

    #!/bin/bash
    
    #
    # Backup critical files
    #
    
    ## Options
    
    # Number of backups to keep
    NUM_TO_KEEP=6
    # directory to be backed up
    BACKUP_SOURCE="/mnt/hdd/share/unraid"
    # directory to store backups
    BACKUP_DEST="/mnt/hdd/share/backup"
    
    # Begin backup
    dest_file="${BACKUP_DEST}/unraid-$(date '+%Y-%m-%d')"
    echo "Archiving critical files to: ${dest_file}.tar.gz"
    
    tar -czf "${dest_file}.tar.gz" "${BACKUP_SOURCE}"
    if [[ $? != 0 ]]; then
        # Alert
        echo "Critical Backup Archiving FAILED"
        exit 1
    fi
    
    # make it readable by all
    chmod a+r "${dest_file}.tar.gz"
    
    # Clear out all but X most recent
    (cd "${BACKUP_DEST}/" && rm $(ls -t *.tar.gz | awk "NR>$NUM_TO_KEEP")) 2>/dev/null
    
    # Alert success
    echo "Critical Backup Archiving Completed"
    exit 0

     

    Question on this, and sorry for my delay; I got busy and didn't get a chance to fully explore it.

     

    So I assume you run this in the "Console" of Unraid to make it work.  And based on the backup settings, I assume it'll continue to run indefinitely.  How do you end it?  How do you modify it?
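
    (For anyone finding this later: a sketch of how this kind of script is typically run.  Rather than the console, it can be pasted into the User Scripts plugin, which handles scheduling, stopping, and editing.  With a custom schedule, the plugin takes standard cron syntax; the timing below is just an example:)

    # Custom schedule field in the User Scripts plugin (cron: min hour dom month dow)
    # Example: run every Wednesday at 06:00, i.e. the day after a Tuesday backup
    0 6 * * 3

    Disabling or deleting the script in the plugin ends it; editing it there modifies it.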

     

    Also, on the echo commands: that's the message displaying Successful or Failed -- does that come through as an Unraid notification?  I've familiarized myself with the script itself; I'm just a little confused about execution, since I am so used to Dockers/Apps.


    Thanks!

  3. Thanks all for the comments.  Makes me feel better.

    30 minutes ago, trurl said:

    Or just upgrade one at a time so you have one valid parity drive.  If you write anything to the array while rebuilding parity, the old parity disk is invalid.

    I think I'll upgrade like this.  This seems to make the most sense.  Eventually the 8TB drives will be redeployed as data disks, but it would be nice to have them as a backup in case parity messes up.

     

    34 minutes ago, itimpi said:

    Why do you want to use the Parity Swap procedure?  If you simply want to upgrade both parity drives, then: stop the array; unassign the current parity drives; start the array to make Unraid 'forget' them; stop the array; assign the new parity drives; start the array to begin building the contents of the new parity drives.

     

    I think this is a case of bad terminology on my end.  I thought I was doing a parity swap, but really I'm just upgrading.  I confused what I wanted to do with what the instructions described.

     

    Thanks all for your input!

  4. Running Unraid 6.9.2, and I had a question on the parity swap procedure.  Currently, I have dual parity drives in place: two 8TB drives.  I am looking to replace both of these with 14TB drives for future-proofing, etc.  I was able to find the instructions at the link below; however, I had a couple of questions, as the procedure only addresses a single parity drive swap and not a scenario where I already have dual parity.

     

    https://wiki.unraid.net/The_parity_swap_procedure

     

    1) Are the instructions still valid for 6.9.2?  The concern I have is around the downtime of the array; I assume it will take 2 days, as that's the length of my last parity check (8TB).  I'd like to avoid server downtime if possible, and I wasn't sure if there was a way to do this given I have dual parity.

     

    2) Are the procedures the same for each parity drive?  Do I do one first, then the other?  (That potentially means 4 days of total downtime.)

     

    3) Any tips or suggestions?

  5. On 10/21/2021 at 12:22 PM, Ford Prefect said:

    Ah, yes...you are right.
    It is always the same NIC that is failing and then recovering again. It is the one named eth0, running the Intel igb driver, from your picture above.

    Check the bios, especially energy saving features (often called ASPM) and try disabling these...if possible, only for the NICs.

    You can test the second one to see if it behaves better, as it uses a different driver in Linux/unRAID (e1000e).

    If you only want to deploy a single link (a bond in active-backup mode will allow for higher availability, as it will switch over in case of a problem), you could disable bonding in the Unraid config.  Should you keep it, connect both ports to your switch with patch cables.

    Sent from my SM-G780G using Tapatalk
     

     

    So I am still testing this, but I have not messed with BIOS settings just yet.  I did plug another Ethernet cord from the second LAN port into the switch, and it seems to have made a difference.  I haven't had the issue for two days.  Perhaps I have a bad NIC?

  6. 3 hours ago, Ford Prefect said:

    Ah, yes...you are right.
    It is always the same NIC that is failing and then recovering again. It is the one named eth0, running the Intel igb driver, from your picture above.

    Check the bios, especially energy saving features (often called ASPM) and try disabling these...if possible, only for the NICs.

    You can test the second one to see if it behaves better, as it uses a different driver in Linux/unRAID (e1000e).

    If you only want to deploy a single link (a bond in active-backup mode will allow for higher availability, as it will switch over in case of a problem), you could disable bonding in the Unraid config.  Should you keep it, connect both ports to your switch with patch cables.

    Sent from my SM-G780G using Tapatalk
     

    Thanks so much for these.  Definitely going to try them when I get home from business travel.  Appreciate your help and time!

  7. 3 hours ago, Ford Prefect said:

    ...you are running a bonded interface, in active backup mode, build from your two Intel based NICs.

    Apparently one of these is having problems and when that happens, the active member of the bond will switch over to the other NIC.

     

    You need to find out what is actually killing the link.

    Does this happen to only the first NIC or does this also happen to the other one?

    You could try by changing the enumeration, by swapping the MAC addresses for eth0 and eth1.

    unraid will normally use eth0 as the default, active NIC.

     

    The first line of your log indicates that this is not a cabling issue but an issue with the PCI(e) interface the NIC sits on.

    Also, the addresses of the two NICs indicate that they are not on the same bus address, hence not a dual-port card.

    Try disabling ASPM in the BIOS for the NICs and/or PCIe.

     

    If the one(s) that fail are on an external card, try deploying it in a different slot.

     

     

     

    Thanks.  I guess I'll check out the board BIOS settings.  I'll note that this is not an external card; all NICs are integrated on the mobo, and I have no separate cards.  For reference, the mobo is a Supermicro X12SCZ-F.  I know there is a BIOS update out there for it; I'll contact Supermicro to see if it relates to the NICs.  I'm always hesitant about BIOS updates.
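
    (Side note in case the BIOS lacks an ASPM toggle: ASPM can also be disabled with a kernel parameter by editing the append line in /boot/syslinux/syslinux.cfg on the flash drive.  A sketch, assuming the stock boot entry:)

    # /boot/syslinux/syslinux.cfg -- add pcie_aspm=off to the kernel append line
    label Unraid OS
      menu default
      kernel /bzimage
      append pcie_aspm=off initrd=/bzroot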

     

    Also, I didn't know I had two NICs.  I took a quick look at the mobo specs, and I guess you are right.  It shows:

    Network Controllers:

    Single LAN with Intel® PHY I219LM LAN controller

    Single LAN with Intel® Ethernet Controller I210-AT

     

    Which actually makes sense, because I have 3 network ports in the rear I/O: IPMI (Supermicro), LAN2, and LAN1.  I am currently only using the LAN2 port.

     

    Would you also suggest switching to the LAN1 port and using that?  Or wiring both up to the switch?  Like I said, my networking knowledge is amateur at best :).  Thanks for your help and time in responding!  Note that LAN2 seems to be linked to the 5E:B5 MAC address in my picture above; LAN1 is linked to 5E:B4, which again is not wired into my network switch.

  8. I also figured it might be beneficial to show my Unraid network settings; see the picture below.  I am in no way a technical expert with networking (equipment), but I'm confused about why I have two interfaces, eth0 and eth1, that use the same MAC address.  I did "Port Down" the eth1 interface, as reflected in the image; however, I still got the issue noted above after that change.

    [screenshot: Unraid network settings]

  9. So I have been having problems running Live TV through Plex and would have sworn up and down it was a Plex issue.  I would check the Plex logs and constantly encounter issues where Live TV (communication between Plex and the HDHomeRun device) would stutter.  I believed this was a Plex/HDHomeRun problem, but then I started thinking about my server dropping my mapped network drives, among other odd occurrences (for example, when using PuTTY), where it just seems to have trouble on the network.  I decided to run Live TV while monitoring the server logs, and the second my Live TV started stuttering, the logs below appeared in Unraid.  I am starting to believe this isn't a Plex issue but some sort of network issue related to either Unraid or my hardware.  I know this isn't the typical format for providing logs, but I wanted to show the specifics of when the issue occurs.  I can provide additional diagnostics if necessary, but I wasn't sure if something stood out from the below.  For reference, the server IP is 192.168.1.119 (which I am sure is clear below).  Thanks in advance for your help.

     

    Oct 20 11:56:57 HawkinsUnraid kernel: pcieport 0000:00:1c.5: AER: Uncorrected (Non-Fatal) error received: 0000:04:00.0
    Oct 20 11:56:57 HawkinsUnraid kernel: igb 0000:04:00.0: PCIe Bus Error: severity=Uncorrected (Non-Fatal), type=Transaction Layer, (Requester ID)
    Oct 20 11:56:57 HawkinsUnraid kernel: igb 0000:04:00.0: device [8086:1533] error status/mask=00004000/00000000
    Oct 20 11:56:57 HawkinsUnraid kernel: igb 0000:04:00.0: [14] CmpltTO
    Oct 20 11:56:57 HawkinsUnraid kernel: bond0: (slave eth0): link status definitely down, disabling slave
    Oct 20 11:56:57 HawkinsUnraid kernel: device eth0 left promiscuous mode
    Oct 20 11:56:57 HawkinsUnraid kernel: bond0: now running without any active interface!
    Oct 20 11:56:57 HawkinsUnraid kernel: br0: port 1(bond0) entered disabled state
    Oct 20 11:56:57 HawkinsUnraid kernel: pcieport 0000:00:1c.5: AER: device recovery successful
    Oct 20 11:56:58 HawkinsUnraid dhcpcd[2125]: br0: carrier lost
    Oct 20 11:56:58 HawkinsUnraid avahi-daemon[12196]: Withdrawing address record for 192.168.1.119 on br0.
    Oct 20 11:56:58 HawkinsUnraid avahi-daemon[12196]: Leaving mDNS multicast group on interface br0.IPv4 with address 192.168.1.119.
    Oct 20 11:56:58 HawkinsUnraid avahi-daemon[12196]: Interface br0.IPv4 no longer relevant for mDNS.
    Oct 20 11:56:58 HawkinsUnraid dhcpcd[2125]: br0: deleting route to 192.168.1.0/24
    Oct 20 11:56:58 HawkinsUnraid dhcpcd[2125]: br0: deleting default route via 192.168.1.1
    Oct 20 11:56:58 HawkinsUnraid dnsmasq[15575]: no servers found in /etc/resolv.conf, will retry
    Oct 20 11:56:59 HawkinsUnraid ntpd[2200]: Deleting interface #4820 br0, 192.168.1.119#123, interface stats: received=28, sent=28, dropped=0, active_time=229 secs
    Oct 20 11:56:59 HawkinsUnraid ntpd[2200]: 216.239.35.0 local addr 192.168.1.119 -> <null>
    Oct 20 11:56:59 HawkinsUnraid ntpd[2200]: 216.239.35.4 local addr 192.168.1.119 -> <null>
    Oct 20 11:56:59 HawkinsUnraid ntpd[2200]: 216.239.35.8 local addr 192.168.1.119 -> <null>
    Oct 20 11:56:59 HawkinsUnraid ntpd[2200]: 216.239.35.12 local addr 192.168.1.119 -> <null>
    Oct 20 11:57:01 HawkinsUnraid kernel: igb 0000:04:00.0 eth0: igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
    Oct 20 11:57:01 HawkinsUnraid kernel: bond0: (slave eth0): link status definitely up, 1000 Mbps full duplex
    Oct 20 11:57:01 HawkinsUnraid kernel: bond0: (slave eth0): making interface the new active one
    Oct 20 11:57:01 HawkinsUnraid kernel: device eth0 entered promiscuous mode
    Oct 20 11:57:01 HawkinsUnraid dhcpcd[2125]: br0: carrier acquired
    Oct 20 11:57:01 HawkinsUnraid kernel: bond0: active interface up!
    Oct 20 11:57:01 HawkinsUnraid kernel: br0: port 1(bond0) entered blocking state
    Oct 20 11:57:01 HawkinsUnraid kernel: br0: port 1(bond0) entered forwarding state
    Oct 20 11:57:02 HawkinsUnraid dhcpcd[2125]: br0: rebinding lease of 192.168.1.119
    Oct 20 11:57:06 HawkinsUnraid dhcpcd[2125]: br0: probing address 192.168.1.119/24
    Oct 20 11:57:11 HawkinsUnraid dhcpcd[2125]: br0: leased 192.168.1.119 for 86400 seconds
    Oct 20 11:57:11 HawkinsUnraid dhcpcd[2125]: br0: adding route to 192.168.1.0/24
    Oct 20 11:57:11 HawkinsUnraid dhcpcd[2125]: br0: adding default route via 192.168.1.1
    Oct 20 11:57:11 HawkinsUnraid avahi-daemon[12196]: Joining mDNS multicast group on interface br0.IPv4 with address 192.168.1.119.
    Oct 20 11:57:11 HawkinsUnraid avahi-daemon[12196]: New relevant interface br0.IPv4 for mDNS.
    Oct 20 11:57:11 HawkinsUnraid avahi-daemon[12196]: Registering new address record for 192.168.1.119 on br0.IPv4.
    Oct 20 11:57:11 HawkinsUnraid dnsmasq[15575]: reading /etc/resolv.conf
    Oct 20 11:57:11 HawkinsUnraid dnsmasq[15575]: using nameserver 192.168.1.1#53
    Oct 20 11:57:12 HawkinsUnraid ntpd[2200]: Listen normally on 4821 br0 192.168.1.119:123
    Oct 20 11:57:12 HawkinsUnraid ntpd[2200]: new interface(s) found: waking up resolver
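
    (A simple way to watch for a recurrence of these events in real time; the patterns are just the strings from the log above:)

    # Follow the syslog and flag the PCIe error and bond failover lines as they recur
    tail -f /var/log/syslog | grep -Ei "AER|CmpltTO|bond0"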

  10. So I keep getting warning messages because my Docker image file is getting full.  The culprit appears to be NordVPN:

     

    [screenshot]

     

    How do I keep NordVPN from writing to my docker.img?
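
    (For context on diagnosing this: anything a container writes to a path that is not mapped to a host volume lands inside docker.img, so the usual fix is mapping the offending path out.  A sketch; the container name and host path below are assumptions:)

    # Show each container's writable-layer size (the SIZE column)
    docker ps -s
    # Find the large paths inside the suspect container (name assumed)
    docker exec nordvpn du -ahx / 2>/dev/null | sort -rh | head -20
    # Remedy: map the heavy write path to host storage in the container template, e.g.
    #   container path /config  ->  host path /mnt/user/appdata/nordvpn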

  11. On 9/8/2021 at 11:09 AM, trurl said:

    Go to Docker page, click Container Size button at bottom, and post a screenshot.

     

    Odd that it runs up to 90% full when this shows only 8.6GB.  I honestly didn't even pay attention to this button to check container size.  Maybe I'll check it when I get the notification again.  I am trying to recall what I was doing when I received the 90% full warning, but I can't remember offhand.

     

    [screenshot: Container Size results]

     

    Also, I am not sure what's going on with Disk 7.  Right now, it has a docker.img file on it at 21.5GB.  Perhaps at one point my docker.img got too big and moved there?  There is an isos folder in there as well, but no files that I can see.

  12. So I've tried researching this issue, but I think something has changed with the newer Unraid versions.  I keep getting messages similar to:

     

     Docker image disk utilization of 96%
    Description: Docker utilization of image file /mnt/cache_nvme/system/docker/docker.img

     

    My cache_nvme pool is 2TB and nowhere near full.  Some of the previous threads are from a couple of years ago and indicate you could stop Docker, go to advanced view, and increase the size there; however, that no longer appears to work.  How do I settle this message and allocate more space to docker.img?
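
    (Before resizing, it can help to see where the space inside docker.img is actually going; these are standard Docker commands, nothing Unraid-specific assumed:)

    # Summarize space used by images, containers, and volumes inside docker.img
    docker system df
    # Per-item detail, including each container's writable layer
    docker system df -v

    If one container's writable layer is huge, mapping its write path to a host volume is usually a better fix than growing the image.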

  13. 7 minutes ago, Hawkins12 said:

    Looks like something's amiss.  Here are the results; I ran it twice for good measure.  I did replace the IP with my internal IP for the Plex container (and webhook) as well.

    [screenshot: curl results]

    OK, disregard.  I checked the logs of the PlexAnnouncer container, and I did get the result you posted: [screenshot: PlexAnnouncer log]

     

    And in the process, I think I figured out the issue.  In Plex Settings, I added:

    [screenshot: Plex webhook URL setting]

     

    I think this needs to read :32500 and not :32400.  I just need to test it.  

  14. 5 hours ago, JohnnyP said:

    Hmm, looks alright at first glance.  Stupid question, but are you able to communicate from the Plex container to the PlexAnnouncer container?  Try running the following from inside your Plex container:

    curl -X POST 192.168.40.201:32500/PLEX_WEBHOOK_TOKEN

    Obviously replace PLEX_WEBHOOK_TOKEN with your token. You should see something like this in your PA logs:

    [screenshot: expected PlexAnnouncer log output]

     

     

    Looks like something's amiss.  Here are the results; I ran it twice for good measure.  I did replace the IP with my internal IP for the Plex container (and webhook) as well.

    [screenshot: curl results]
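
    (For anyone following along, a sketch of running that test from the Unraid console; the container name "plex" and the token are placeholders:)

    # Open a shell inside the Plex container (container name assumed)
    docker exec -it plex /bin/sh
    # Then, from inside the container, post to the announcer (token is a placeholder)
    curl -X POST 192.168.40.201:32500/PLEX_WEBHOOK_TOKEN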

  15. 37 minutes ago, jamerperson said:

    It has to be something new. Removing and re-adding a movie will not trigger an event.

    I just added a new TV series to my TV Series folder.  Nothing came through to Discord.  Is there generally a delay?

     

    I also added a new movie.  It still didn't report to Discord.

  16. OK, I am at a loss.  I feel like I have everything set up correctly, but I'm not getting push notifications in my Discord channels.  Here's what I've got:

    [screenshots: Plex settings]

    So I think I have my Plex settings right.  Here are my docker settings:

    [screenshots: PlexAnnouncer docker settings]

     

    And of course, the Discord webhook URL matches the "Copy Webhook URL" from Discord.

     

    Finally, checking the logs of PlexAnnouncer, it is running and I see:

    [screenshot: PlexAnnouncer startup log]  That's it.

     

    I added a couple of shows to one of my libraries in Plex, and nothing showed up in Discord.  What did I do wrong?

     

     

  17. Requesting assistance from anyone familiar with how to set up environment variables for MeTube.  Here's what I am trying to accomplish: I need to "mount" a cookies.txt file so MeTube knows how to properly pull the cookies file when retrieving video.  I have a cookies.txt file generated; I just need to set an environment variable to recognize it.  Below are instructions from the dev, but I am lost on a couple of points.

    Quote

    But the idea is that just as you mount your /downloads folder as a docker volume, you need to mount the cookies file into the container in the same way (as a volume). Let's say you mount it in the root directory as /cookies.txt in the container. Then you can instruct the yt-dlp copy that MeTube runs to use this cookie file. That you will have to do via the YTDL_OPTIONS environment variable. You'll have to set it to {"cookiefile":"/cookies.txt"}. If you're doing it directly via YAML, make sure you properly escape this string. 

     

    I am not sure how to set up the environment variable to recognize the cookies file, since MeTube doesn't have an appdata folder, etc.  Any assistance would be helpful.

     

    Based on the above, I mounted the path as follows (note the cookies.txt file is in the /downloads/Metube/ directory):

    [screenshot: MeTube path mapping]

     

    And here is what I have for the environment variable...what am I doing wrong?

    [screenshot: YTDL_OPTIONS environment variable]
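
    (For reference, the equivalent docker run form of what I'm attempting; the image name, port, and host paths are assumptions based on my setup, and the JSON is single-quoted so the shell leaves it intact:)

    docker run -d --name metube \
      -p 8081:8081 \
      -v /mnt/user/downloads/Metube:/downloads \
      -v /mnt/user/downloads/Metube/cookies.txt:/cookies.txt:ro \
      -e YTDL_OPTIONS='{"cookiefile":"/cookies.txt"}' \
      ghcr.io/alexta69/metube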

     

     

  18. Been running Unraid just fine for several months.  Last night, I had a power outage.  My UPS kicked in as expected and did a shutdown, also as expected.  I received the email below:

    Event: Unraid Server Alert
    Subject: UPS Alert
    Description: Remaining battery runtime below limit on UPS HawkinsUnraid. Doing shutdown.
    Importance: alert

     

    So I wake up this morning and boot up the server, and now I am stuck at the screen below: [screenshot: boot screen]

     

    It won't do anything else.  What's the problem?  I know I've previously read this can be a RAM issue, but given I've had months of restarts on this server without trouble, that didn't seem to be it.  Another thing I read about was UEFI, and my motherboard is set to run in UEFI.  What else can I try?

  19. 5 minutes ago, CS01-HS said:

    You can even include unraid notifications (in addition to the "echo" printouts.)

     

    Notices ("normal") appear in green:

    /usr/local/emhttp/webGui/scripts/notify -e "Unraid Server Notice" -s "Backup Critical Files" -d "Backup complete" -i "normal"

     

    and alerts in red:

    /usr/local/emhttp/webGui/scripts/notify -e "Unraid Server Error" -s "Backup Critical Files" -d "Backup error" -i "alert"

     

    Haha, I wish I had your brains.  Thanks for this.  I am still studying the script and trying to understand it.  Appreciate your help!
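
    (A minimal sketch of wiring those notify calls into the earlier backup script's failure and success branches; everything here comes from the script and commands quoted above:)

    tar -czf "${dest_file}.tar.gz" "${BACKUP_SOURCE}"
    if [[ $? != 0 ]]; then
        /usr/local/emhttp/webGui/scripts/notify -e "Unraid Server Error" \
            -s "Backup Critical Files" -d "Backup error" -i "alert"
        exit 1
    fi
    /usr/local/emhttp/webGui/scripts/notify -e "Unraid Server Notice" \
        -s "Backup Critical Files" -d "Backup complete" -i "normal"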
