Posts posted by Keichi

  1. Thanks!

    But this is for Homepage (https://github.com/gethomepage/homepage), which uses the Glances integration (https://github.com/nicolargo/glances).

     

    I will ask directly on Glances' GitHub for more information.

     

    I just wanted to understand the issue better, and from what you gave me, something is possible:

    root@Yggdrasil:~# zfs get -Hp -o value available,used /mnt/cache
    966035955456
    963024068864

    Values seem correct to me.
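
    If Glances needs the total capacity, I guess it can be derived from those two values (a quick sketch on my side, assuming total ≈ available + used for this dataset):

    # sum the two properties and print the result in a human-readable unit
    zfs get -Hp -o value available,used /mnt/cache | paste -sd+ | bc | numfmt --to=iec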

     

    Thanks again JorgeB!

     

    K.

    • Like 1
  2. Hello,

     

    On my Unraid server, I have this cache pool:

    (screenshot 2024-02-11 18:33:52)

     

    It is 3x 1 TB drives, with one mirror, for a total of 1.9 TB usable capacity.

    On the Unraid console, when I run:

    df -h

     

    I get this output:

    root@Yggdrasil:~# df -h
    Filesystem                     Size  Used Avail Use% Mounted on
    rootfs                          24G  1.6G   22G   7% /
    tmpfs                           32M  2.2M   30M   7% /run
    /dev/sda1                       58G  2.5G   55G   5% /boot
    overlay                         24G  1.6G   22G   7% /lib
    overlay                         24G  1.6G   22G   7% /usr
    devtmpfs                       8.0M     0  8.0M   0% /dev
    tmpfs                           24G     0   24G   0% /dev/shm
    tmpfs                          128M  1.5M  127M   2% /var/log
    tmpfs                          1.0M     0  1.0M   0% /mnt/disks
    tmpfs                          1.0M     0  1.0M   0% /mnt/remotes
    tmpfs                          1.0M     0  1.0M   0% /mnt/addons
    tmpfs                          1.0M     0  1.0M   0% /mnt/rootshare
    (array disks here)
    cache                          939G   23G  916G   3% /mnt/cache
    cache/system                   933G   17G  916G   2% /mnt/cache/system
    cache/domains                  1.4T  417G  916G  32% /mnt/cache/domains
    cache/appdata                  916G  2.3M  916G   1% /mnt/cache/appdata
    cache/appdata/mosquitto        916G  256K  916G   1% /mnt/cache/appdata/mosquitto
    cache/appdata/grafana          916G  1.3M  916G   1% /mnt/cache/appdata/grafana
    cache/appdata/homeassistant    916G   93M  916G   1% /mnt/cache/appdata/homeassistant
    cache/appdata/mysql            916G  119M  916G   1% /mnt/cache/appdata/mysql
    cache/appdata/nginx            916G  1.0M  916G   1% /mnt/cache/appdata/nginx
    (etc. for other containers)

     

    The "cache" filesystem is reported as 939 GB with only 23 GB used.

     

    I don't really understand this value.

     

    Is it because of ZFS?
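
    My current guess (to be confirmed) is that this is normal for ZFS: for each dataset, df reports Size = that dataset's used space + the pool's shared free space, so no single line shows the full pool capacity. A quick way to check this against the numbers above:

    # the df "Size" of cache/domains should roughly match used + available
    zfs get -Hp -o value used,available cache/domains | paste -sd+ | bc | numfmt --to=iec
    df -h /mnt/cache/domains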

     

    I use Glances (with Homepage), and I can't get it to show the correct size of the cache pool:

    (screenshot 2024-02-11 18:41:08)

    (screenshot 2024-02-11 18:43:22)

     

    It seems to report the same values as the df command.

     

    It is really not a big deal, but maybe I can use this to learn something more.

     

    Thanks all,

     

    K.

  3. Hello,

     

    For some folders, I have lines like:

    cd..t...... StableDiffusion/Models/
    cd..t...... StableDiffusion/Models/codeformer/
    cd..t...... StableDiffusion/Models/controlnet/
    cd..t...... StableDiffusion/Models/embeddings/

     

    Is it possible to know what this means?

    I can't find an explanation anywhere.
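
    My best guess so far, from skimming man rsync (please correct me if I am wrong):

    # cd..t......
    #  c = a local change/creation is occurring for this item
    #  d = the item is a directory
    #  t = the modification time differs and will be updated
    # so these lines would only mean rsync is updating the mtime of the directories, not their contents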

     

    Thanks!

  4. 10 hours ago, Stupot said:

    I guess my next question then, what steps would one take to free up space on the disk? Can you just start deleting older folders?

     

    I configured the script like this:

    backup_jobs=(
      # source                                  # destination
      (...)
      "[email protected]:/mnt/user/Medias"   "/mnt/user/Yggdrasil/Medias"
      (...)
    )

     

    In a file explorer (Krusader here), it looks like this:

    (screenshot 2023-10-22 12:24:51)

     

    Every folder seems to have the same size, but the script hard-links unchanged files, so if you check with du, you can see it is incremental:

    root@Helheim:~# du -d1 -h /mnt/user/Yggdrasil/Medias | sort -k2
    13T     /mnt/user/Yggdrasil/Medias
    13T     /mnt/user/Yggdrasil/Medias/20231009_185050
    18G     /mnt/user/Yggdrasil/Medias/20231011_121936
    2.4G    /mnt/user/Yggdrasil/Medias/20231012_122623
    1.1G    /mnt/user/Yggdrasil/Medias/20231013_122558
    12G     /mnt/user/Yggdrasil/Medias/20231014_122649
    1.7M    /mnt/user/Yggdrasil/Medias/20231015_122126
    18G     /mnt/user/Yggdrasil/Medias/20231016_122350
    1.7M    /mnt/user/Yggdrasil/Medias/20231017_122821
    1.7M    /mnt/user/Yggdrasil/Medias/20231018_122338
    25G     /mnt/user/Yggdrasil/Medias/20231019_122321
    115G    /mnt/user/Yggdrasil/Medias/20231020_122923
    19G     /mnt/user/Yggdrasil/Medias/20231021_122356

     

    Just delete the old folder; any file that was only present in that one will be deleted, while files still referenced by newer backups are kept.
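
    To illustrate why that is safe, here is roughly what the script does per run, as I understand it (a sketch only, assuming rsync --link-dest; the paths, the date and $SOURCE are made up):

    # files that did not change are created as hard links into the previous dated folder
    rsync -a --link-dest=/mnt/user/Yggdrasil/Medias/20231021_122356 \
          "$SOURCE/" /mnt/user/Yggdrasil/Medias/20231022_120000
    # a file's data stays on disk as long as at least one dated folder still links to it,
    # which is why du only charges new or changed data to each folder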

     

    K.

    • Like 1
  5. On 9/19/2023 at 9:54 PM, Stupot said:

    Does this script have checks in place so that when the destination is full, it begins deleting old backup data?

     

    Hello,

     

    I faced this situation a few weeks ago.

    I can't find the error email, but the short answer is: no.

    You will receive a notification saying "can't write on destination, disk is full".

     

    The script is pretty dumb (even the backup retention is based only on date, not on file uniqueness), but still, I love it and use it every day :)
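
    If you want to avoid hitting the "disk is full" error, the only idea I have is a small pre-check before the backup runs, something like this (the threshold and the path are just examples):

    # abort when the destination has less than 100 GiB free
    avail=$(df --output=avail -B1 /mnt/user/Yggdrasil | tail -1)
    if [ "$avail" -lt $((100*1024*1024*1024)) ]; then
        echo "Destination nearly full, delete old backup folders first" >&2
        exit 1
    fi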

     

    K.

  6. Hello,

     

    I have been using your Seafile Docker for four months:

    (screenshot 2023-08-16 19:23:12)

    (screenshot 2023-08-16 19:23:52)

     

    I love it. Thanks!

     

    Right now, I use "Backup/Restore Appdata" to make a backup of the container, which requires stopping Seafile.

    As I am the only user and store a small amount of data, it takes less than 10 minutes, which I do overnight:

    (screenshot 2023-08-16 19:25:37)

     

    (screenshot 2023-08-16 19:32:10)

     

    I think it is time to move forward.

    I would like to store more data, and also open it up to my family, mainly to store photos/videos of my nephews, as no one in the family really understands "MAKE BACKUPS" or "a file in one place is a dead file".

     

    So the total amount of data will grow, and I think stopping the container and making a 1 TB+ archive won't be the right solution.

     

    What is the recommendation?

     

    To give you some context :

    - appdata (and the Seafile container) is on a ZFS pool:

    (screenshot 2023-08-16 19:36:53)

     

    - I keep my appdata backups on the array, inside /mnt/disk5/Backup/UNRAID/appdata

     

    - Every day, a second Unraid server is turned on and uses this script to back up the main server:

    On the backup server, inside the array, I have a ZFS disk waiting for some snapshots:

    (screenshot 2023-08-16 19:40:02)

     

    I think I can manage something with that, but I don't know whether ZFS snapshots or rsync is the better solution, as the Seafile data can be updated at any time.
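
    For the ZFS side, the kind of thing I have in mind is roughly this (a sketch only; the dataset names cache/appdata/seafile and backup/seafile are made up, and the snapshot names would come from the previous run):

    # take an atomic snapshot while Seafile keeps running
    zfs snapshot cache/appdata/seafile@$(date +%Y%m%d_%H%M%S)
    # then send it incrementally to the ZFS disk on the backup server
    zfs send -i cache/appdata/seafile@previous cache/appdata/seafile@latest | ssh root@Helheim zfs receive backup/seafile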

     

    Thank you for your time, and for the container!

     

    K.

  7. Hello again.

     

    OK, I see and I am convinced, so I installed rsync-server and followed your support thread.

     

    From the main server :

    root@Yggdrasil:~# rsync --dry-run --itemize-changes --archive -e 'ssh -p 5533' root@Yggdrasil:/mnt/user/system/ /tmp
    .d..tpog... ./
    cd+++++++++ docker/
    >f+++++++++ docker/docker.img
    cd+++++++++ libvirt/
    >f+++++++++ libvirt/libvirt.img

     

    From the backup server :

    root@Helheim:~# rsync --dry-run --itemize-changes --archive -e 'ssh -p 5533' [email protected]:/mnt/user/system/ /tmp
    .d..tpog... ./
    cd+++++++++ docker/
    >f+++++++++ docker/docker.img
    cd+++++++++ libvirt/
    >f+++++++++ libvirt/libvirt.img

     

    And if I test:
     

    root@Helheim:~# ssh -p 5533 [email protected]
    Welcome to Alpine!
    
    The Alpine Wiki contains a large amount of how-to guides and general
    information about administrating Alpine systems.
    See <http://wiki.alpinelinux.org/>.
    
    You can setup the system with the command: setup-alpine
    
    You may change this message by editing /etc/motd.

     

    I guess it is working.

     

    Back to the script :
     

    # user-defined rsync command
    # alias rsync='sshpass -p "mypassword" rsync -e "ssh -o StrictHostKeyChecking=no"'
    
    # user-defined ssh command
    # alias ssh='sshpass -p "mypassword" ssh -o "StrictHostKeyChecking no"'

     

    My guess is that I need to edit these lines.

    But right now, the ssh connection does not use any specific port, so it will "ssh normally" into the server, right?

     

    How do I add the port, in order to use the read-only path?

     

    I think it is :
     

    # user-defined rsync command
    alias rsync='rsync -e "ssh -p 5533 -o StrictHostKeyChecking=no"'
    
    # user-defined ssh command
    alias ssh='ssh -p 5533 -o StrictHostKeyChecking=no'
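
    And if the aliases are picked up by the script, I suppose I can verify it with the same test as before, without typing the port:

    # should land on the rsync-server container (same Alpine welcome message as above),
    # which would confirm the aliased ssh really uses port 5533
    ssh [email protected]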

     

    Thanks

     

    K.

  8. 18 minutes ago, mgutt said:

    You could mount the main server in the Backup Server and execute the script on the Backup Server. By that hardlinks will work.

     

    Hello,

     

    I added the script on the backup server with :
     

    # #####################################
    # Settings
    # #####################################
    
    # backup source to destination
    backup_jobs=(
      # source                          # destination
      "[email protected]:/mnt/user/Public"                  "/mnt/user/Yggdrasil/Public"
    )
    
    
    (...)
    
    
    # user-defined rsync command
    alias rsync='sshpass -p "mystrongpassword" rsync -e "ssh -o StrictHostKeyChecking=no"'
    
    # user-defined ssh command
    alias ssh='sshpass -p "mystrongpassword" ssh -o "StrictHostKeyChecking no"'

     

    And it seems to work :

    root@Helheim:/boot/extra# du -d1 -h /mnt/disk3/Yggdrasil/Public | sort -k2
    1.2M    /mnt/disk3/Yggdrasil/Public
    1.2M    /mnt/disk3/Yggdrasil/Public/20230811_194104
    4.0K    /mnt/disk3/Yggdrasil/Public/20230811_194204
    4.0K    /mnt/disk3/Yggdrasil/Public/20230811_194225
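
    For what it is worth, I guess the hard links can also be checked by comparing inode numbers between two dated folders (the file name here is just an example):

    # identical inode numbers and a link count > 1 mean the file is hard-linked
    stat -c '%i %h %n' /mnt/disk3/Yggdrasil/Public/20230811_*/log.txt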

     

    Thank you for the quick answer !

     

    K.

  9. Hello,

     

    I made some tests with the Public folder, but the hard links seem broken:

    root@Helheim:/mnt/disk3/Yggdrasil/Public/20230811_171609# ls
    20230811_171609.log*  log-cut.txt*  log.txt*
    
    root@Helheim:/mnt/disk3/Yggdrasil/Public/20230811_174553# ls
    20230811_174553.log*  log-cut.txt*  log.txt*
    
    
    root@Helheim:~# du -d1 -h /mnt/disk3/Yggdrasil/Public | sort -k2
    6.9M    /mnt/disk3/Yggdrasil/Public
    1.2M    /mnt/disk3/Yggdrasil/Public/20230811_171609
    (...)
    1.2M    /mnt/disk3/Yggdrasil/Public/20230811_174553

     

    I use Unassigned Devices to mount an SMB share (from another local Unraid server):

    (screenshot 2023-08-11 17:41:31)

     

    Then, I run the script with:

    # backup source to destination
    backup_jobs=(
      # source                          # destination
      "/mnt/user/Public"                "/mnt/remotes/HELHEIM_Yggdrasil/Public"
    )

     

    The destination folder is on "disk3", formatted as BTRFS:

    (screenshot 2023-08-11 17:44:20)

     

    (screenshot 2023-08-11 17:45:26)

     

    Here is the log when I run the script:

    Script location: /tmp/user.scripts/tmpScripts/00 - Backup Yggdrasil to Helheim/script
    Note that closing this window will abort the execution of this script
    # #####################################
    last_backup: '20230811_173934/'
    Create incremental backup from /mnt/user/Public to /mnt/remotes/HELHEIM_Yggdrasil/Public/20230811_174553 by using last backup /mnt/remotes/HELHEIM_Yggdrasil/Public/20230811_173934/
    created directory /mnt/remotes/HELHEIM_Yggdrasil/Public/.20230811_174553
    cd..t...... ./
    cf...p..... .DS_Store
    cf...po.... log-cut.txt
    cf...pog... log.txt
    
    Number of files: 5 (reg: 3, dir: 2)
    Number of created files: 0
    Number of deleted files: 0
    Number of regular files transferred: 0
    Total file size: 1.19M bytes
    Total transferred file size: 0 bytes
    Literal data: 0 bytes
    Matched data: 0 bytes
    File list size: 0
    File list generation time: 0.001 seconds
    File list transfer time: 0.000 seconds
    Total bytes sent: 183
    Total bytes received: 113
    
    sent 183 bytes received 113 bytes 197.33 bytes/sec
    total size is 1.19M speedup is 4,017.89
    File count of rsync is 5
    Make backup visible ...
    ... through local mv
    renamed '/mnt/remotes/HELHEIM_Yggdrasil/Public/.20230811_174553' -> '/mnt/remotes/HELHEIM_Yggdrasil/Public/20230811_174553'
    # #####################################
    Clean up outdated backups
    Keep daily backup: 20230811_174553
    Keep multiple backups per day: 20230811_173934
    Keep multiple backups per day: 20230811_173509
    Keep multiple backups per day: 20230811_172110
    Keep multiple backups per day: 20230811_171727
    Keep multiple backups per day: 20230811_171609

     

    In my case the hard links seem broken (I did not edit the content of Public):

    root@Helheim:~# du -d1 -h /mnt/disk3/Yggdrasil/Public | sort -k2
    6.9M    /mnt/disk3/Yggdrasil/Public
    1.2M    /mnt/disk3/Yggdrasil/Public/20230811_171609
    1.2M    /mnt/disk3/Yggdrasil/Public/20230811_171727
    1.2M    /mnt/disk3/Yggdrasil/Public/20230811_172110
    1.2M    /mnt/disk3/Yggdrasil/Public/20230811_173509
    1.2M    /mnt/disk3/Yggdrasil/Public/20230811_173934
    1.2M    /mnt/disk3/Yggdrasil/Public/20230811_174553

     

    Do you know how I can troubleshoot this?
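
    One thing I plan to try myself, to see whether hard links are supported at all through the SMB mount (just a guess on my side):

    # if this fails, or the link count stays at 1, hard links do not survive the SMB mount
    touch /mnt/remotes/HELHEIM_Yggdrasil/Public/.linktest
    ln /mnt/remotes/HELHEIM_Yggdrasil/Public/.linktest /mnt/remotes/HELHEIM_Yggdrasil/Public/.linktest2
    stat -c '%h' /mnt/remotes/HELHEIM_Yggdrasil/Public/.linktest
    rm -f /mnt/remotes/HELHEIM_Yggdrasil/Public/.linktest /mnt/remotes/HELHEIM_Yggdrasil/Public/.linktest2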

     

    Thanks,

     

    K.

  10. Hello,

     

    I'm setting up a second unraid server whose sole role is to back up my main one.

     

    I recently discovered that btrfs can do snapshots and that a plugin exists (https://forums.unraid.net/topic/114600-plugin-snapshots/).

    This would be perfect for my pool disks.

     

    However, my array is XFS and is not seen by the plugin.
    So I came across this script, which lets you make incremental rsync backups:

     

    And finally, I've switched my "appdata" disk to ZFS, so I figure I can host a ZFS disk on the backup server and transfer the snapshots to it.

     

     

    So in the end I'll have:
    - XFS array => incremental rsync script
    - BTRFS pool => Snapshots plugin
    - ZFS pool => ZFS snapshots sent to a ZFS disk

     

    This seems a bit complicated to me. Do you have any feedback on the best way to build an Unraid incremental backup server?

     

    Thanks

     

    K.

     

  11. That is pretty strange; in the GUI, the value is greyed out, meaning I did not change it:

    (screenshots 2023-07-15 19:27:37, 19:26:59 and 19:26:25)

     

    Yesterday I changed the global setting from 90/95% to 0/0 (I did not touch the per-disk settings).

     

    Nothing changed: I still receive the two notifications ("Disk 6 is high on usage (88%)" and, a minute later, "Disk 6 returned to normal utilization level") twice a day (at 6 in the morning and 10 in the evening, French time).

     

    I can easily trigger these notifications by pushing a disk above 70%, even though the global setting is 90/95%, or 0/0 since yesterday.

    (screenshot 2023-07-15 19:35:12)

     

    (screenshot 2023-07-15 19:35:31)

     

    So it seems to me that Unraid completely ignores my settings and sticks to 70% (the default setting?).

     

    If you tell me that the SSDs have a custom limit in the diagnostics, while I do not see that in the GUI, then maybe there is something to investigate.
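
    In case it helps, I also plan to grep the flash drive for any per-disk override, although I am not sure in which file these values are stored:

    # look for any warning/critical thresholds stored in the config files on the flash drive
    grep -riE 'warning|critical' /boot/config --include='*.cfg'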

     

    K.

  12. Indeed disk1 (HDD) was set to 90/95.

    I deleted the values, and now, all disks (8 HDDs array + 4 SSDs pool) are supposed to follow the general rule.

    Here is the new diagnostics file:

    yggdrasil-diagnostics-20230715-1728.zip

     

    But my configured threshold is still not 70% (I did a lot of tries; the notification kicks in at 70% no matter what config I choose).

    Disk 6, which generates the notification, is at 88% right now, so still under the 90/95 that disk1 had, and it still does not respect the "do not send notification" setting.

     

    My goal is to keep the notifications at 90/95%, not to disable them.

    I only ended up trying to disable them in order not to receive the false notifications.

     

    The evening notification is due in a few hours; if I do not receive it, I will try putting back my 90/95% settings.

     

    But the main issue here is: the notification kicks in at 70% no matter what threshold I define.

     

    Thanks

     

     

    --

    EDIT

    The issue continues; I still receive notifications:

    (screenshot 2023-07-15 21:58:40)

  13. Hello,

     

    I'm currently building a second UNRAID (backup) server and I'd like to solve a "bug" I have with the current one.

     

    If the disk usage exceeds 70%, I receive an email, then, the next minute, a second email saying that in fact no, the limit has not been exceeded.

     

    I know this thread exists:


    But obviously, the last message was from me, more than a year ago, and the situation hasn't improved with Unraid v6.12.

     

    In Settings / Disk Settings, I have disabled the notifications:

    (screenshot 2023-07-15 15:04:56)

     

    For the disk concerned as well:

    (screenshot 2023-07-15 15:05:41)

     

    I've also checked the /boot/config/plugins/dynamix/dynamix.cfg file:

    (screenshot 2023-07-15 15:07:52)

    However, I still receive two e-mails, twice a day, informing me that the limit has been exceeded:

    (screenshot 2023-07-15 15:12:53)

    I've also tried setting 90% and 95%, but I still get an e-mail as soon as it exceeds 70%.

     

    Would it be possible for someone to help me investigate the problem?

    Here is the diagnostics file:

    yggdrasil-diagnostics-20230715-1515.zip

     

    Thank you very much,

    K.

     

    • Upvote 1
  14. Hello all,

     

    Thanks for the replies.

     

    Today I tried the same setup, but after a BIOS update ("test1"): same issue.
    Then after the BIOS update, but with the IOCrest card ("test2"): same issue.
    After BIOS update, but with IOCrest card ("test2") : same issue.

     

    So... I think I should give up.

     

    By any chance, JorgeB, do you know of an add-on controller that uses a PCIe x1 slot?

    I only see x4 cards, and there are no x4 slots left on my motherboard.

     

    K.

    yggdrasil.local - test1.png

    yggdrasil.local - test2.png

    yggdrasil-diagnostics-20230606-2009-test1.zip yggdrasil-diagnostics-20230606-2021 - test2.zip
