Posts posted by bclinton

  1. Hi friends. What setting would I change to have the script keep only the latest backup? In other words, after it backs up, it would delete the previous backup folder and my backup drive would simply have one (the latest) backup folder. I changed the days retention to 1; will that do it? I back up once a week on Mondays.

     

    Would this do the trick?

     

    # keep backups of the last X days
    keep_days=1

    # keep multiple backups of one day for X days
    keep_days_multiple=0

    # keep backups of the last X months
    keep_months=0

    # keep backups of the last X years
    keep_years=0
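
    If I am reading those settings right, keep_days=1 would prune everything older than a day once the next run finishes, which with a weekly schedule would leave only the newest folder. Written out as a rough sketch (this is not the script's actual code, and the destination path and folder naming below are just examples):

    # rough sketch only; assumes timestamped backup folders named
    # YYYYMMDD_HHMMSS under an example destination path
    backup_root="/mnt/disks/backup"
    keep_days=1
    cutoff=$(date -d "-${keep_days} days" +%Y%m%d)   # anything older gets pruned
    for dir in "$backup_root"/????????_??????; do
        [ -d "$dir" ] || continue
        day=$(basename "$dir" | cut -c1-8)           # YYYYMMDD part of the folder name
        if [ "$day" -lt "$cutoff" ]; then
            echo "would delete $dir"                 # swap echo for rm -r once happy
        fi
    done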

  2. On 12/11/2023 at 12:36 PM, mgutt said:

    Yes.

     

    It can be confusing, too: if you delete the 254G backup and repeat the command, the 154G backup becomes "bigger".

     

    And of course: If you delete the 6.0T backup, the 154G backup will "become" 6.x T.

     

    Sorry for the ?'s but I have one more. :)

     

    In this example, can I simply keep the original backup and the last backup and be fine? In other words, delete the 1223 backup to save space and keep only the 1225 and the 1127? Next week's backup will be 010124, so I can delete the 1225 backup once the 0101 run is finished?

     

    (screenshot of the backup folder listing attached)

  3. 19 minutes ago, mgutt said:

    Yes

     

    Interesting... still trying to wrap my little brain around hardlinks. On my backup drive (below), I can delete the 254G and the 154G folders and run the backup again, and it will back up all the files against the original 6TB backup from 11-27? My drive is 8TB, and what I have been doing is formatting it when it gets close to 8TB, then running a fresh backup and starting over. Being able to delete only the incrementals would save a little time.

     

    6.4T    /mnt/disks/backup
    6.0T    /mnt/disks/backup/20231127_181243
    254G    /mnt/disks/backup/20231204_033001
    154G    /mnt/disks/backup/20231211_033001
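
    For anyone else puzzling over the same thing, here is a generic demo of how hardlinks and du interact (made-up paths, nothing to do with the backup script itself):

    mkdir -p /tmp/hl_demo/a /tmp/hl_demo/b
    dd if=/dev/zero of=/tmp/hl_demo/a/big.bin bs=1M count=100   # one 100M file
    ln /tmp/hl_demo/a/big.bin /tmp/hl_demo/b/big.bin            # hardlink, no extra space used
    du -sh /tmp/hl_demo/a /tmp/hl_demo/b /tmp/hl_demo
    # In a single du call the 100M is charged to the first folder scanned and
    # the second folder shows up as tiny, even though both "contain" the file.
    # Deleting one of the two names frees nothing as long as the other link
    # remains; the data only disappears when its last link is removed.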

  4. Hi friends. Still trying to wrap my head around ZFS. I followed SpaceInvaderOne's tutorial and converted my 2-drive BTRFS cache pool to ZFS. All seems fine, and I created a snapshot for each of the datasets. I also created a dataset for each docker folder inside of appdata. Before this, I ran the appdata backup plugin, which created a zip file of the appdata folder once a week and placed it in a backup share; I then copied the backup share to offsite storage. Fast forward to now... the appdata backup plugin no longer works for my ZFS datasets, so I turned it off. My struggle is understanding how I can move the datasets over to a separate backup location in the event something bad happens to my 2-drive ZFS cache pool. In other words, how can I back up my datasets so that, if something does happen, I can restore the latest snapshot to a repaired ZFS cache pool?

     

    Thanks for any insight :)
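
    To make the question concrete, this is the shape of answer I am hoping for, written as a sketch with made-up pool and dataset names ("cache" as the source pool, "backup" as a pool on a separate backup drive); I have not tested any of this:

    # take a recursive snapshot of the appdata datasets
    zfs snapshot -r cache/appdata@weekly-20240101
    # first run: full replication of the dataset tree to the backup pool
    zfs send -R cache/appdata@weekly-20240101 | zfs receive -u backup/appdata
    # later runs: send only the changes between the last two snapshots
    zfs snapshot -r cache/appdata@weekly-20240108
    zfs send -R -i @weekly-20240101 cache/appdata@weekly-20240108 | zfs receive -u backup/appdata
    # restore after rebuilding the cache pool: send the newest snapshot back
    zfs send -R backup/appdata@weekly-20240108 | zfs receive -F cache/appdata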

  5. I am seeing the same thing. After several reboots my shares were still missing. I found this thread, disabled Docker, rebooted, and the shares are back. I then enabled Docker and turned all containers except TDARR back on, and the shares are still showing up. Is there any more info on this as far as TDARR is concerned? I have left it off for now until I can find out more. Attaching my latest diag file.

     

    Thanks

     

    unraid-diagnostics-20230429-1406.zip

  6. 7 minutes ago, mgutt said:

    You did not change the paths 😉 (/dst is the default path of the script)

     

    EDIT: Changed the default paths to avoid RAM flooding if someone forgets to change the paths.

     

    You are correct. I had pasted my changes below the defaults instead of replacing them. All is fine now, thanks! :)

  7. 21 minutes ago, mgutt said:

    v1.5 released

    - fixed hardlink test

     

    FYI - I just upgraded from 1.4, which was fine, and am now getting the error that enabler reported, and the script aborts.

     

    Script Starting Nov 01, 2022 17:57.30

    Full logs for this script are available at /tmp/user.scripts/tmpScripts/backup/log.txt

    # #####################################
    created directory /dst/link_dest
    >f+++++++++ empty.file
    --link-dest arg does not exist: /dst/link_dest
    removed '/tmp/_tmp_user.scripts_tmpScripts_backup_script/empty.file'
    Error: Your destination /dst does not support hardlinks!
    Script Finished Nov 01, 2022 17:57.33

    Full logs for this script are available at /tmp/user.scripts/tmpScripts/backup/log.txt
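
    In case it helps anyone hitting the same abort, here is a plain manual check that a destination filesystem supports hardlinks at all, independent of the script's own test (the path is just an example):

    touch /mnt/disks/backup/hl_test
    ln /mnt/disks/backup/hl_test /mnt/disks/backup/hl_test_link
    stat -c '%h' /mnt/disks/backup/hl_test   # prints 2 if the hardlink was created
    rm -f /mnt/disks/backup/hl_test /mnt/disks/backup/hl_test_link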

     

     

  8. Hi friends. Trying to get Grafana running on Unraid 6.11.1 by following the instructions at Unraid | Unraid Data Monitoring with Prometheus and Grafana. I am stuck at step 10. The Grafana container starts, but when I try to launch the web UI I get this error:

     

    This page isn’t working right now

    192.168.1.24 redirected you too many times.

    To fix this issue, try clearing your cookies.

    ERR_TOO_MANY_REDIRECTS

     

    The log repeats this....

    logger=context userId=0 orgId=0 uname= t=2022-10-20T18:37:34.404787385-05:00 level=info msg="Request Completed" method=GET path=/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/login status=302 remote_addr=192.168.1.118 time_ms=0 duration=407.592µs size=783 referer= handler=notfound
    logger=cleanup t=2022-10-20T18:41:45.061524809-05:00 level=info msg="Completed cleanup jobs" duration=7.659615ms

     

    Just looking for a suggestion. Thanks.
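
    For context, the settings that control how Grafana builds its redirect URLs are the [server] root_url and sub-path options, which the official container reads from environment variables. As an illustration only (example values and port mapping, not a confirmed fix for this error):

    docker run -d --name=grafana \
      -p 3000:3000 \
      -e GF_SERVER_ROOT_URL="http://192.168.1.24:3000/" \
      -e GF_SERVER_SERVE_FROM_SUB_PATH="false" \
      grafana/grafana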
     

  9. 1 hour ago, itimpi said:

    If you have space for it, you could start by upgrading the parity drive and then adding the old parity drive as a data drive to give 8TB of additional usable space. That would avoid the need to have 2x12TB from the outset.

     

    I considered that, but I wanted to see if I could reduce the power used by the server. I started with 16 TB of data and one 8TB parity drive, but realized that I would be at least a couple of years from needing more than 8 TB of data. After I removed the 8TB drive, my power usage went down almost 20 watts at idle.

  10. Hi friends! I have been using Unraid for about 18 months now. I have two 8 TB drives, with one being parity, along with two 1 TB cache SSDs. It's been humming right along, and I am up to about 5 TB on the data drive. With the prices of big drives coming down, I am going to be swapping my 8TB parity and data drives soon. What would be the suggested process for doing this? I was thinking I should swap the parity drive first and let it rebuild with a new 12 TB drive; once all is good, swap the data drive with the new 12 TB drive. Would that be the best way to accomplish this? At the rate I add data, I figure this would be good for at least 3-4 years for me.

  11. 47 minutes ago, LyDjane said:

    nobody can help here? :(

    The way I back up the appdata folder is with the backup/restore appdata plugin, storing the result in a backup share along with the Plex backup and various other backup files. I then back up that backup share with the rsync script instead of the appdata folder itself.

  12. 18 minutes ago, MX-Hero said:

     

    I want to try this because I get failed backups if I back up my appdata folder. Is it possible to start the stopped container after the backup has finished?

    The way I handle my appdata folder, along with Plex, is to back it up to a backup share with the appdata backup plugin once a week. I do this with other folders that are not really live-backup friendly... that way the script backs up a nice clean tar file.
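
    On the start/stop question: outside of any plugin, the same thing can be scripted with plain docker commands around the copy. A generic sketch (made-up container and path names, not part of the backup script):

    docker stop plex                          # stop so the appdata files are consistent
    tar -czf "/mnt/user/backup/plex-appdata-$(date +%Y%m%d).tar.gz" \
        -C /mnt/user/appdata plex             # archive the container's appdata folder
    docker start plex                         # bring the container back up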

  13. On 1/15/2022 at 1:02 PM, SpaceInvaderOne said:

    I am in the process of remaking Macinabox & adding some new features and hope to have it finished by next weekend.

    I am sorry for the lack of updates recently on this container.

    Thank you @ghost82 for all you have done in answering questions here and on GitHub, and sorry I haven't reached out to you before.

     

    Awesome news! Can't wait!

  14. 21 hours ago, Squid said:

    Post the entire zip file

     

    Update. This morning I did a shutdown on the server rather than a reboot and figured I would try the VM setup again. This time it started normally. Not sure why I was having problems before, but the full shutdown seems to have resolved it. Previously I would simply restart the server.

     

    Thanks again for helping with this. It is much appreciated. 

  15. Hi friends. I had 2 existing VMs, Windows 10 and Macinabox, that were working fine; I haven't used them for months. I was recently trying to install an Ubuntu Server VM, which I have done in the past with no issues. No matter what settings I used it would not work, so I tried to run my Windows 10 and Macinabox VMs and they too would not work. Looking at the log files for all 3 VMs, they end with the same entry as the log below (qxl_send_events: spice-server bug: guest stopped, ignoring). I have updated to 6.10RC2 since the last time I used VMs... not sure if that is related. I am now unable to add any VMs, as I continue to get the same error in the log file.

     

    Just looking for something to try. I have rebooted the server and deleted the libvirt.img file, which deleted my 2 existing VMs.

     

     

    LOG

    -smp 2,sockets=1,dies=1,cores=1,threads=2 \
    -uuid f8fe7cd9-363a-83bb-5c25-481cd180cd51 \
    -no-user-config \
    -nodefaults \
    -chardev socket,id=charmonitor,fd=35,server=on,wait=off \
    -mon chardev=charmonitor,id=monitor,mode=control \
    -rtc base=utc,driftfix=slew \
    -global kvm-pit.lost_tick_policy=delay \
    -no-hpet \
    -no-shutdown \
    -boot strict=on \
    -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
    -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
    -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
    -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
    -device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
    -device qemu-xhci,p2=15,p3=15,id=usb,bus=pcie.0,addr=0x7 \
    -device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \
    -blockdev '{"driver":"file","filename":"/mnt/user/domains/Ubuntu/vdisk1.img","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
    -blockdev '{"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
    -device virtio-blk-pci,bus=pci.3,addr=0x0,drive=libvirt-2-format,id=virtio-disk2,bootindex=1,write-cache=on \
    -blockdev '{"driver":"file","filename":"/mnt/user/isos/ubuntu-20.04.3-live-server-amd64.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
    -blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \
    -device ide-cd,bus=ide.0,drive=libvirt-1-format,id=sata0-0-0,bootindex=2 \
    -netdev tap,fd=37,id=hostnet0 \
    -device virtio-net,netdev=hostnet0,id=net0,mac=52:54:00:87:42:04,bus=pci.1,addr=0x0 \
    -chardev pty,id=charserial0 \
    -device isa-serial,chardev=charserial0,id=serial0 \
    -chardev socket,id=charchannel0,fd=38,server=on,wait=off \
    -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
    -device usb-tablet,id=input0,bus=usb.0,port=1 \
    -audiodev id=audio1,driver=none \
    -vnc 0.0.0.0:0,websocket=5700,audiodev=audio1 \
    -k en-us \
    -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pcie.0,addr=0x1 \
    -device virtio-balloon-pci,id=balloon0,bus=pci.4,addr=0x0 \
    -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
    -msg timestamp=on
    char device redirected to /dev/pts/0 (label charserial0)
    qxl_send_events: spice-server bug: guest stopped, ignoring
