Posts posted by deeks

  1. So with the RAID5 setup I mainly need to check usage instead of free space for my cache disks. Thanks for helping out!

     

    What I still wonder is why my previous cache setup (2 SSDs in RAID1) did not start moving data to disk as it ran out of space. My cache folders were set to "prefer" in the share settings.

     

    I would have thought that Unraid would stop filling the cache pool at a certain point. It didn't...

  2. Hi Johnnie.black - thanks for your response. This is the output I get:

     

     

    Linux 4.19.88-Unraid.
    Last login: Sat Jan 25 10:22:33 +0100 2020 on /dev/pts/0.
    root@unRaid:~# btrfs fi usage -T /mnt/cache
    WARNING: RAID56 detected, not implemented
    WARNING: RAID56 detected, not implemented
    WARNING: RAID56 detected, not implemented
    Overall:
        Device size:                   1.31TiB
        Device allocated:                0.00B
        Device unallocated:            1.31TiB
        Device missing:                  0.00B
        Used:                            0.00B
        Free (estimated):                0.00B      (min: 8.00EiB)
        Data ratio:                       0.00
        Metadata ratio:                   0.00
        Global reserve:               16.00MiB      (used: 0.00B)

                 Data      Metadata  System              
    Id Path      RAID5     RAID5     RAID5    Unallocated
    -- --------- --------- --------- -------- -----------
     1 /dev/sdc1   1.00GiB 512.00MiB 32.00MiB   445.60GiB
     2 /dev/sde1   1.00GiB 512.00MiB 32.00MiB   445.60GiB
     3 /dev/sdb1   1.00GiB 512.00MiB 32.00MiB   445.60GiB
    -- --------- --------- --------- -------- -----------
       Total       2.00GiB   1.00GiB 64.00MiB     1.30TiB
       Used      768.00KiB 112.00KiB 16.00KiB     

     

    I wiped the 3 cache disks, so yes, used space is almost nil. But not getting a correct free space indication is a bit of a bummer, as one might run out of free space without prior warning.
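
    In the meantime, a rough back-of-envelope estimate of usable space (my own arithmetic, not something btrfs or Unraid reports while RAID56 accounting is unimplemented): with RAID5 data across n equal devices, roughly (n-1)/n of the raw capacity is usable for data.

    # 3 devices with ~445.6GiB unallocated each -> about 2 devices' worth usable for data
    echo "scale=1; 445.6 * (3 - 1)" | bc    # ~891.2 GiB expected usable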

  3. Hi guys,

     

    I would like to create a cache pool of 3 SSDs of 480GB each in a RAID5 config. I have applied the -dconvert=raid5 -mconvert=raid1 command and expected about 960GB of free space according to the carvox.org RAID calculator. Unraid shows me 1.4TB of free space, however. How do I know my RAID setup is correct?
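
    For reference, a sketch of the kind of balance the convert options refer to (assuming the pool is mounted at /mnt/cache; the mount point may differ on your system):

    btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/cache
    btrfs fi usage -T /mnt/cache    # check the resulting data/metadata profiles afterwards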

  4. Hi Dadarara - check out mobile apps like Evernote Scannable or even Apple Notes (the standard iOS app) if you want easy scanning with reasonable quality. These cover 99% of my document scanning needs. They are easy to set up and use, super fast, and use built-in sharing to mail, file folders or a printer. But that's another topic as well ;-)

     

     

  5. Hi there,

     

    I had a Raspberry Pi running CUPS with a Brother HL-5240 laser printer connected via USB, which worked fine until the Pi died (RIP). I decided to install the CUPS docker and hook the printer up to my AirPort. CUPS sees the printer and installs it with the Gutenprint PPD, but CUPS does not print and returns the status: "Unable to locate printer "Airport-Router.local"".

     

    How can this be fixed?
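
    One diagnostic I could try first (a sketch, assuming the container is named cups and has ping available; .local names rely on mDNS, which is often not reachable from inside a Docker container):

    # from the Unraid host: can the CUPS container resolve the AirPort's .local name?
    docker exec cups ping -c 1 Airport-Router.local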

  6. My tests with only one VM went fine. Be careful to check the results if you want to use the script with multiple VMs.

     

    I have set the script to run every night, even though the server will not be running 24x7 in practice. It takes about 1.5 hours to create a zipped backup of my VM plus its XML file, and it always leaves at least one backup file in the folder.
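
    For reference, the nightly schedule is just a cron expression (the time below is only an example, not necessarily what I have set):

    # every night at 03:00
    0 3 * * *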

     

    I have added the script for those interested. Feel free to comment and improve.

    script

  7. More fine tuning in the works ...

    • The tidy-up of old backups was somewhat too rigid, as it cleaned out all backups older than a predefined number of days. The new script should leave at least one backup in place.
    • Fix a small personal niggle: change the date/timestamp to European date format (yyyymmdd), since I keep mixing up months and days with US notation.

    First tests with small sample files went well. I will let the script run with production settings for a while and post back results in this thread. A rough sketch of both changes is below.
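
    Sketch of the two changes, using the $backup_location and $vm variables that already appear in the script (the timestamp format and the exact integration point are assumptions on my part):

    # European-style date stamp (yyyymmdd) instead of US month-first notation
    timestamp="$(date +%Y%m%d)_"

    # clean out zips older than 2 days, but always skip the newest one so at least one backup remains
    ls -1t "$backup_location/$vm/"*.zip 2>/dev/null | tail -n +2 | while read -r old_zip; do
        find "$old_zip" -mtime +2 -exec rm -f {} \;
    done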

  8. Also tweaked further to clean out old zip files in order not to fill up too much disk space.

     

    Look for this piece of code (around line 1075):

    echo "information: finished attempt to backup "$vms_to_backup" to $backup_location."

    Then add this extra code directly beneath:

    # Log the location where the backed-up files will be cleaned out
    echo "information: cleaning out backups older than 2 days in location $backup_location/$vm/"

    # Delete all files older than two 24-hour periods
    find "$backup_location/$vm/" -mtime +2 -exec rm -f {} \;

    I tested it with the -mmin option instead of -mtime in order to speed up testing.
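
    For reference, the test variant looks like this (+5 here means "older than 5 minutes"; the number is just an example):

    # same clean-up, but in minutes instead of days, for quick testing
    find "$backup_location/$vm/" -mmin +5 -exec rm -f {} \;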

     

  9. On 10/03/2017 at 0:10 AM, giantkingsquid said:

    Thanks for the great script danioj, nice work :)

     

    For those wanting to reduce the size of the .img I modified the second rsync command, adding a zip and rm so that it's like this:
     

    
    rsync -av$rsync_dry_run_option "$disk" "$backup_location/$vm/$timestamp$new_disk" && zip -j "$backup_location/$vm/$timestamp".zip "$backup_location/$vm/$timestamp"* && rm "$backup_location/$vm/$timestamp"*.img "$backup_location/$vm/$timestamp"*.xml

     

    Not elegant but it seems to work ok.

     

     

    Hi there - this works well, but I have tweaked it slightly to include the name of the virtual machine in the filename.

     

    rsync -av$rsync_dry_run_option "$disk" "$backup_location/$vm/$timestamp$new_disk" && zip -j "$backup_location/$vm/$timestamp$vm".zip "$backup_location/$vm/$timestamp"* && rm "$backup_location/$vm/$timestamp"*.img "$backup_location/$vm/$timestamp"*.xml

     

  10. Hi Squid,

     

    CA is working again for me. So it was a DNS issue after all, and I guess that was out of my control. I was pulling my hair out after the CA reinstall and a lot of fiddling with the router did not bring it back to life. Thanks for looking into the issue!

     

    Deeks
