Ancan

Posts posted by Ancan

  1. 20 hours ago, JorgeB said:

    Some can work better than others, but other than the performance penalty there can be disk timeouts, in some cases causing the disks to drop, so they should be avoided.

     

    Right, I checked around a bit myself, and one showstopper is that if a disk fails, there's a chance that the entire SATA channel stops responding and makes the other multiplexed drives fail as well. That seems like a risk of data loss, so I'll be returning this card.

  2. After ordering a new controller, I think I did too little research first. I'm currently using an LSI 8-port HBA, which has started to give errors on my parity drive (even after replacing both the cable and the drive), so I thought I'd replace it and go for a plain SATA card instead, since they are supposed to have much lower power consumption.

     

    However, looking at the order now, I see that the 10-port card uses a JMB575 port multiplier, and I just read that those are not recommended for unRAID. So now I ask: what could go wrong with this card? Am I just losing performance, or are there more serious downsides?

  3. Hi all,

    After swapping my cache drive for a larger one, I reinstalled the Deconz container from "old apps" and restored the appdata folder from my backup. Since then I've lost control over all my Zigbee devices.

     

    Checking the logs, there are lots of "error APSDE-DATA.confirm: 0xE1 on task" entries.

     

    I've also noticed that in the container, "DECONZ_DEVICE" is set to "0", but in the config screen it says "set same as device", which for me is /dev/ttyACM0. Is this expected?

    It does seem to find the USB device, because it confirms that the firmware is up to date.
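
    For what it's worth, this is roughly how I check the mapping from the host side (a sketch; the container name "deconz" is an assumption, adjust it to whatever yours is called):

    # Device node as seen by the host
    ls -l /dev/ttyACM0
    # Environment actually passed to the container (container name assumed)
    docker exec deconz env | grep DECONZ_DEVICE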

  4. The first Mac just joined the household, and I've been trying to connect to shares on the unRAID server from it. All works fine except hidden shares (i.e. the share name ends with "$"). I have no problem mounting hidden shares that are on a Windows machine from the Mac, nor mounting the hidden unRAID shares from a Windows machine, which leads me to suspect there's something off with the Samba setup on unRAID.

     

    I've monitored the traffic while connecting and it seems I get STATUS_BAD_NETWORK_NAME.

     

    24  1.892387    192.168.y.y 192.168.x.x SMB2    414 Create Request File: ;GetInfo Request FILE_INFO/SMB2_FILE_ALL_INFO;Close Request
    25  1.894348    192.168.x.x 192.168.y.y SMB2    566 Create Response File: ;GetInfo Response;Close Response
    27  1.894602    192.168.y.y 192.168.x.x SMB2    414 Create Request File: ;GetInfo Request FILE_INFO/SMB2_FILE_ALL_INFO;Close Request
    28  1.896113    192.168.x.x 192.168.y.y SMB2    566 Create Response File: ;GetInfo Response;Close Response
    30  2.083146    192.168.y.y 192.168.x.x SMB2    208 Tree Connect Request Tree: \\<myNAS>._smb._tcp.local\<sharename>$
    31  2.085822    192.168.x.x 192.168.y.y SMB2    151 Tree Connect Response, Error: STATUS_BAD_NETWORK_NAME
    33  2.085957    192.168.y.y 192.168.x.x SMB2    190 Tree Connect Request Tree: \\192.168.x.x\<SHARENAME>$
    34  2.088408    192.168.x.x 192.168.y.y SMB2    151 Tree Connect Response, Error: STATUS_BAD_NETWORK_NAME
    36  2.089989    192.168.y.y 192.168.x.x SMB2    198 Tree Connect Request Tree: \\<myNAS>._smb._tcp.local\IPC$
    37  2.091087    192.168.x.x 192.168.y.y SMB2    158 Tree Connect Response
    39  2.091278    192.168.y.y 192.168.x.x SMB2    240 Ioctl Request FSCTL_DFS_GET_REFERRALS, File: \192.168.x.x\<sharename>$
    40  2.093056    192.168.x.x 192.168.y.y SMB2    151 Ioctl Response, Error: STATUS_NOT_FOUND
    42  2.093249    192.168.y.y 192.168.x.x SMB2    138 Tree Disconnect Request
    43  2.094102    192.168.x.x 192.168.y.y SMB2    146 Tree Disconnect Response
    45  2.095655    192.168.y.y 192.168.x.x SMB2    198 Tree Connect Request Tree: \\<myNAS>._smb._tcp.local\IPC$
    46  2.097853    192.168.x.x 192.168.y.y SMB2    158 Tree Connect Response
    48  2.098042    192.168.y.y 192.168.x.x SMB2    240 Ioctl Request FSCTL_DFS_GET_REFERRALS, File: \192.168.x.x\<sharename>$
    49  2.100083    192.168.x.x 192.168.y.y SMB2    151 Ioctl Response, Error: STATUS_NOT_FOUND
    51  2.100296    192.168.y.y 192.168.x.x SMB2    138 Tree Disconnect Request
    52  2.100979    192.168.x.x 192.168.y.y SMB2    146 Tree Disconnect Response
    54  2.102956    192.168.y.y 192.168.x.x SMB2    198 Tree Connect Request Tree: \\<myNAS>._smb._tcp.local\IPC$
    55  2.104394    192.168.x.x 192.168.y.y SMB2    158 Tree Connect Response
    57  2.104578    192.168.y.y 192.168.x.x SMB2    220 Ioctl Request FSCTL_DFS_GET_REFERRALS, File: \192.168.x.x
    58  2.105260    192.168.x.x 192.168.y.y SMB2    151 Ioctl Response, Error: STATUS_NOT_FOUND
    60  2.105402    192.168.y.y 192.168.x.x SMB2    138 Tree Disconnect Request
    61  2.106209    192.168.x.x 192.168.y.y SMB2    146 Tree Disconnect Response
    63  2.258212    192.168.y.y 192.168.x.x SMB2    446 Create Request File: :AFP_AfpInfo;Read Request Len:60 Off:0;Close Request
    64  2.260541    192.168.x.x 192.168.y.y SMB2    318 Create Response, Error: STATUS_OBJECT_NAME_NOT_FOUND;Read Response, Error: STATUS_FILE_CLOSED;Close Response, Error: STATUS_FILE_CLOSED
    66  2.260991    192.168.y.y 192.168.x.x SMB2    446 Create Request File: :AFP_AfpInfo;Read Request Len:60 Off:0;Close Request
    67  2.262606    192.168.x.x 192.168.y.y SMB2    318 Create Response, Error: STATUS_OBJECT_NAME_NOT_FOUND;Read Response, Error: STATUS_FILE_CLOSED;Close Response, Error: STATUS_FILE_CLOSED
    69  2.263692    192.168.y.y 192.168.x.x SMB2    414 Create Request File: ;GetInfo Request FS_INFO/FileFsSizeInformation;Close Request
    70  2.264944    192.168.x.x 192.168.y.y SMB2    486 Create Response File: ;GetInfo Response;Close Response
    72  2.265747    192.168.y.y 192.168.x.x SMB2    414 Create Request File: ;GetInfo Request FS_INFO/FileFsSizeInformation;Close Request
    73  2.267337    192.168.x.x 192.168.y.y SMB2    486 Create Response File: ;GetInfo Response;Close Response
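
    For reference, this is roughly how the same thing can be tested outside Finder (a sketch; user and share names are placeholders):

    # From the Mac: try mounting the hidden share by IP
    mkdir -p /tmp/hidden
    mount_smbfs '//user@192.168.x.x/sharename$' /tmp/hidden

    # On the unRAID side: dump the effective Samba config and look for the hidden share
    testparm -s 2>/dev/null | grep -A5 'sharename'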

     

  5. 7 minutes ago, guy.davis said:

     

    Hi, sorry to hear you're having issues with Machinaris.  We can probably solve this faster in the Discord, however, what does `flax farm summary` output?  How about your Blockchains status in WebUI for Flax?  Are you still syncing?  Another way to check this is `flax show -s`  Hope this helps.

     

    Appreciate your support. I'll post in the Flax "farming support" channel.

  6. On 10/20/2021 at 4:32 PM, guy.davis said:

     

    Hi, please ensure that you have copied the mnemonic.txt over and restarted the Flax container:

    cp /mnt/user/appdata/machinaris/mnemonic.txt /mnt/user/appdata/machinaris-flax/mnemonic.txt 

    Then take a look at /mnt/user/appdata/flax/mainnet/log files: debug.log and init.log.  Hope this helps!

     

    Late reply, but I still can't get Flax to work. I've copied the mnemonic.txt to the correct location, and also deleted the database file for good measure, but there are still zero plots. Running "flax plots check" from the console finds them all though, and "flax keys show --show-mnemonic-seed" shows the correct mnemonic.

    In the GUI under "workers" the correct plot directories are shown, with the right disk space usage reported, but the summary still shows "0 Total Plots", and there are lots of alerts saying "Your harvester appears to be offline".
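
    For completeness, these are the checks mentioned above, run from the container console:

    flax plots check                      # finds all the plots
    flax keys show --show-mnemonic-seed   # shows the correct mnemonic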

     

    init.log just shows this:

     

    Flax directory /root/.chia/flax/mainnet
    /root/.chia/flax/mainnet already exists, no migration action taken

     

    debug.log is attached.

    debug.log

  7. Can't seem to get Flax to find my plots after moving it to the separate Docker container. All of /plots1, /plots2 and so on are mapped and listed under plots_dir, and I can see all the plots from the Docker console. Farming is "active" but there are still zero plots. Nothing obvious in the logs.

     

    Not sure if it's relevant, but the harvester process is shown as "defunct" in ps:

     

    [flax_harvester] <defunct>

     

    I believe I followed the guide properly, but must have missed something.

     

     


  8. 0       1       *       *       0       /mnt/disk6/backup/weekly-backup.sh
    0       1       *       *       1-6     /mnt/disk6/backup/daily-backup.sh

     

    As I wrote, this has been running just fine before.
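
    For clarity, the two lines decoded (standard cron fields: minute, hour, day of month, month, day of week):

    # 01:00 every Sunday (day of week 0)
    0       1       *       *       0       /mnt/disk6/backup/weekly-backup.sh
    # 01:00 Monday through Saturday (days of week 1-6)
    0       1       *       *       1-6     /mnt/disk6/backup/daily-backup.sh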

  9. Hi all,

    I've got a cron job to back up all my Docker containers and VMs, but it has stopped running recently.

    It's defined in /boot/config/plugin/dynamix/backup-jobs.cron, and I've not made any changes to this file. Syslog doesn't show anything relevant.

     

    I can run it manually without issues, but it just doesn't run as scheduled. Running Unraid 6.9.0-beta25.

     

    Anyone got any ideas?

  10. 2 hours ago, dee31797 said:

    When you install it on Unraid it's set automatically, but you can override it with the variable -e TZ="America/New_York"

    Thanks for the fast reply! Still doesn't work though. Now I've got two "TZ" entries in the Docker startup parameters, and while "Europe/Stockholm" is what I entered, "Europe/Berlin" is the same zone, so it should still work. Could it be some daylight-saving thing? I usually like troubleshooting things myself, but this is a bit urgent.

     

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='motioneye' --net='bridge' -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e 'UID'='1000' -e 'GID'='100' -e 'TZ'='Europe/Stockholm' -p '8765:8765/tcp' -v '/mnt/user/cam':'/var/lib/motioneye':'rw' -v '/mnt/cache/appdata/motioneye':'/etc/motioneye':'rw' 'djaydev/motioneye'
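
    For reference, a quick way to see which TZ value actually wins inside the container (assuming the image ships the usual coreutils):

    docker exec motioneye printenv TZ   # which TZ ended up in the environment
    docker exec motioneye date          # current local time as the container sees it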

  11. 1 minute ago, johnnie.black said:

    It should. Re: snapshots, try to keep as few snapshots as possible; the more snapshots you have, the slower the filesystem will become. Low single digits are recommended.

    I'm keeping about 20 (four weekly and a bunch of daily). Perhaps I'm expecting too much in trying to imitate the setup I've done on the NetApps at work, but I expected a bit more from btrfs if that is the case.

  12. Hi all,

    I've got an annoying issue with btrfs where accessing the file system sometimes stalls, snapshot transfers take forever or never complete, and for two nights in a row I've found a frozen Unraid box in the morning.

     

    In the syslog, the only sign of anything abnormal is multiple "kernel: BTRFS info (device md6): the free space cache file (nnnnnnnn) is invalid, skip it" messages. It's always the same disk, which is the one I use to store backup snapshots of VMs/containers that are transferred from the cache drive nightly. There are no hardware-related timeouts or anything similar that I would expect if it were a hardware issue.

     

    I assume it's some filesystem issue, but "btrfs check" shows no errors, "scrub" finds no errors, and mounting the disk with the "clear_cache" option makes no difference.
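
    For reference, roughly the commands involved (exact invocations may have differed; the device name is taken from the log line above):

    # Offline, read-only filesystem check (array started in maintenance mode, disk not mounted)
    btrfs check --readonly /dev/md6

    # Online scrub of the mounted disk, then its result
    btrfs scrub start -B /mnt/disk6
    btrfs scrub status /mnt/disk6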

     

    I'm currently evacuating the disk so I can reformat it, but I wanted to ask if anyone has an idea what the problem might be and, if so, whether it's possible to repair.

     

    Edit: I'm on 6.8.2 now, but was on the beta with the 5.x kernel until today.

     

  13. Hi guys!

    I know there are probably a lot of LCC (load cycle count) questions already, but isn't this one a bit strange?

     

    When moving from my Synology to Unraid, I bought two Seagate IronWolf ST4000VN008-2DR166 drives. They've been running for about two months now and already have LCCs of 15500 and 30140. The two WD Reds that I moved over from the Syno are years old (the oldest has 49k hours), but they are only at 9000 and 5500.

     

    Does this seem normal?

     

     

  14. ...and here's an example of how I'm using it.

     

    This is my "daily-backup.sh", which is scheduled daily at 01:00. It snapshots and backs up all VMs and all directories under appdata.

     

    #!/bin/sh

    # VMs

    KEEP_SRC=48h
    KEEP_DST=14d

    SRC_ROOT=/mnt/cache/domains
    DST_ROOT=/mnt/disk6/backup/domains

    cd /mnt/disk6/backup

    # Snapshot and back up every defined VM (running or not)
    virsh list --all --name | sed '/^$/d' | while read -r VM; do
            /mnt/disk6/backup/snapback.sh -c -s "$SRC_ROOT/$VM" -d "$DST_ROOT" -ps $KEEP_SRC -pd $KEEP_DST -t daily
    done


    # AppData

    KEEP_SRC=48h
    KEEP_DST=14d

    SRC_ROOT=/mnt/cache/appdata
    DST_ROOT=/mnt/disk6/backup/appdata

    # Snapshot and back up every directory under appdata
    for APP in "$SRC_ROOT"/*/; do
            /mnt/disk6/backup/snapback.sh -c -s "$APP" -d "$DST_ROOT" -ps $KEEP_SRC -pd $KEEP_DST -t daily
    done
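
    For reference, the matching line from my backup-jobs.cron that triggers it (Sundays are covered by the weekly job instead):

    0       1       *       *       1-6     /mnt/disk6/backup/daily-backup.sh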

     

    • Like 1
  15. Hi all,

    Just spent the day creating a fairly simple script for creating snapshots and transferring them to another location, and thought I'd throw it in here as well in case someone can use it or improve on it.

    Note that it's use-at-your-own-risk. It could probably use more fail-safes and certainly more error checking, but I think it's a good start. I'm new to btrfs as well, so I hope I haven't missed anything fundamental about how these snapshots work.

     

    The background is that I wanted something that performs hot backups of my VMs that live on the cache disk and then moves the snapshots to the safety of the array, so that's more or less what this does, with a few extra bells and whistles.

     

    - It optionally handles retention on both the primary and secondary storage, deleting expired snapshots.

    - Snapshots can be "tagged" with a label, and the purging of expired snapshots only affects the snapshots with this tag, so you can have different retention for daily, weekly and so on.

    - The default location for the snapshots created on the primary storage is a ".snapshots" directory alongside the subvolume you are protecting. This can be changed, but no check is currently performed that it's on the same volume as the source subvolume.

     

    To use it there are some prerequisites:

    - Naturally, both the source and destination volumes must be btrfs.

    - Also, everything you want to protect must be converted to a btrfs subvolume if it isn't one already.

    - Since there's no way to manage btrfs subvolumes that span multiple disks in unRAID, the source and destination must be specified by disk path (/mnt/cache/..., /mnt/diskN/...).

     

    Note that this is a very abrupt way to protect VMs, with no VSS integration or other means of flushing the guest OS file system. It's no worse, however, than what I've been doing at work with NetApp/VMware for years, and I've yet to see a rollback there that didn't work out just fine.

     

    Below is the usage header quoted, and the actual script is attached.

     

    Example of usage:

     

    ./snapback.sh --source /mnt/cache/domains/pengu --destination /mnt/disk6/backup/domains --purge-source 48h --purge-destination 2w --tag daily

     

    This will create a snapshot of the virtual machine "pengu" under /mnt/cache/domains/.snapshots, named something like [email protected].

    It will then transfer this snapshot to /mnt/disk6/backup/domains/[email protected].

    The transfer will be incremental or full depending on whether a symbolic link called "pengu.last" exists in the snapshot directory. This link always points to the latest snapshot created for this subvolume.

    Any "daily" snapshots on the source will be deleted if they are older than 48 hours, and any older than two weeks will be deleted from the destination.
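
    Under the hood this maps to btrfs send/receive; a minimal sketch of the two cases (variable names here are illustrative, not necessarily what snapback.sh uses internally):

    # Full transfer: no "pengu.last" link found yet
    btrfs send "$NEW_SNAPSHOT" | btrfs receive /mnt/disk6/backup/domains

    # Incremental transfer: use the snapshot that "pengu.last" points to as parent
    LAST=$(readlink -f /mnt/cache/domains/.snapshots/pengu.last)
    btrfs send -p "$LAST" "$NEW_SNAPSHOT" | btrfs receive /mnt/disk6/backup/domains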

     

    # snapback.sh
    #
    # A.Candell 2019
    #
    # Mandatory arguments
    #       --source | -s
    #               Subvolume that should be backed up
    #
    #       --destination | -d
    #               Where the snapshots should be backed up to.
    #
    # Optional arguments:
    #
    #       --snapshot-location | -s
    #               Override primary storage snapshot location. Default is a directory called ".snapshots" that is located beside the source subvolume.
    #
    #       --tag | -t
    #               Add a "tag" on the snapshot names (for example for separating daily, weekly).
    #               This string is appended to the end of the snapshot name (after the timestamp) to make it easy to parse and to reduce the risk of
    #               mixing it up with the subvolume name.
    #
    #       --create-destination | -c
    #               Create destination directory if missing
    #
    #       --force-full | -f
    #               Force a full transfer even if a ".last" snapshot is found
    #
    #       --purge-source <maxage> | -ps <maxage>
    #               Remove all snapshots older than maxage (see below) from the snapshot directory. Only snapshots with the specified tag are affected.
    #
    #       --purge-destination <maxage> | -pd <maxage>
    #               Remove all snapshots older than maxage (see below) from the destination directory. Only snapshots with the specified tag are affected.
    #
    #       --verbose | -v
    #               Verbose mode
    #
    #       --whatif | -w
    #               Only echoes the commands instead of executing them.
    #
    # Age format:
    #       A single letter suffix can be added to the <maxage> arguments to specify the unit used.
    #       NOTE: If no suffix is specified, hours are assumed.
    #               s = seconds (5s = 5 seconds)
    #               m = minutes (5m = 5 minutes)
    #               h = hours (5h = 5 hours)
    #               d = days (5d = 5 days)
    #               w = weeks (5w = 5 weeks)

     

     

     

    snapback.sh

    • Like 3
    • Thanks 1
  16. Hi,

    I just finished converting all my drives to btrfs, for no other reason than that I want to use the snapshot feature. Mainly I want to perform hot incremental backups of my (cache-stored) VMs and send the snapshots to a backup directory on the array.

     

    So I was a bit disappointed to see that I can't use btrfs features on the user shares. Creating a subvolume on a user share just gives the error "ERROR: not a btrfs filesystem". I guess the driver providing the unified namespace doesn't pass these things through.
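
    For illustration, this is the difference I mean (paths are just examples):

    # Through the user-share path, btrfs commands are refused
    btrfs subvolume create /mnt/user/backup/test    # fails: ERROR: not a btrfs filesystem

    # Addressing the underlying disk directly works
    btrfs subvolume create /mnt/disk6/backup/test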

     

    Are there any workarounds for this, other than using a dedicated disk for the backups and accessing it directly?

     

    Thanks,

    Anders

  17. 52 minutes ago, bonienl said:

    A parity check is done to ensure everything is still correct (no assumptions), even when there are zeroes.

    So, in that >4GB space, what's the parity going to be compared to? There's no other data to perform a checksum with. "Let's see, I take the bit from this disk, and XOR it with..., well nothing at all, and then see if it's still the same value"?

     

    If you do checksum the unused space, and for some reason a zero has turned into a one, a parity check won't catch that anyway, because 1 XOR nothing is still 1.

    Edit: OK, the check is probably done this way, and then it'll write zeroes to the extra space:

        Start with zero, XOR in the value from each data disk, then compare with (or update) the parity disk.
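
    A toy illustration of that reading, with made-up byte values and single parity only:

    # Per byte offset: start with zero, XOR in every data disk that reaches this
    # offset, then compare with (or write to) the parity disk.
    parity=0
    for d in 0x5A 0x3C; do          # example bytes from two data disks at this offset
            parity=$(( parity ^ d ))
    done
    printf 'expected parity = 0x%02X\n' "$parity"   # 0x66; with no data disks it stays 0x00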

     


    Sorry for being stubborn. It's no biggie really, but I still can't grasp the reason and it bugs me.
