Posts posted by Iormangund

  1. 12 hours ago, binhex said:

    Absolutely! Think of running this script as the icing on the cake, the very last thing to do once everything is in its place :-). As a quick heads-up, I have NOT tested this script against encrypted files. I suspect it will run just fine, but please be cautious and try it on a small test share first before applying it to anything important.

     

    Right now it doesn't support unassigned devices, only disks in the array. Do you use unassigned devices for long-term storage? If so, why not copy to the protected array instead, where you have fault tolerance?

    Thanks, good to know encryption is untested. Will tread carefully.

    It's a btrfs RAID 6 array of 4TB disks I use as a scratch disk and Steam/gaming library (awesome load times); nothing that isn't easily replaced, and I don't waste space backing it up. If it were anything important I sure as hell wouldn't use btrfs RAID 5/6 😆

    Was more of a hypothetical really; nothing on there I need to be immutable.

     

    5 hours ago, jonathanm said:

    I would think that if you are using UD devices for offsite physical backups, you would want to apply the immutable attribute to keep your backup media extra safe when you are accessing it for recovery purposes.

     

    Thanks, good idea about setting immutable on external backups; must remember to do that next time I do a cold-storage backup.
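For reference, a minimal sketch of what that could look like when a backup finishes (the mount point /mnt/disks/backup is a placeholder, and the helpers only print the chattr commands rather than executing them, so this is safe to dry-run):

```shell
# Sketch: toggle the immutable bit on a finished cold-storage backup.
# /mnt/disks/backup is a hypothetical unassigned-device mount point.
set_immutable() {
  echo "chattr -R +i $1"   # recursively set the immutable bit
}
clear_immutable() {
  echo "chattr -R -i $1"   # clear it again before the next backup run
}

set_immutable /mnt/disks/backup
```

Dropping the echo would run the commands for real; the -R makes both recursive, so be sure the path really is the backup mount.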

    • Like 1
  2. 1 hour ago, binhex said:

    No, that's not possible; you cannot run chattr across user shares directly, it has to be done on a per-disk basis.

     

    When you specify the shares to process, the script will inspect each disk in turn, checking for a top-level folder that matches the share name; this makes up the view the user sees when they navigate to /mnt/user/<share name>.

    Ah ok. Guess I will have to wait to use it properly. I'm in the process of encrypting a 24x8TB disk array that is almost full, so everything is being scattered all over the place by unBalance as I empty one disk at a time. It's going to need some reorganising when that's all done, and then I can safely make my files immutable.

    I have an unassigned devices btrfs array mounted at /mnt/disks/btrfs_share. Can the script be used on a share outside the array (or be modified to do so), or would I be better off learning chattr and doing it manually?
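For anyone in the same spot, the per-disk behaviour binhex describes can be roughly approximated by hand. An untested sketch (share name and disk mount points are placeholders; chattr is only echoed, not executed, so it dry-runs safely):

```shell
# Sketch: apply chattr per data disk, mirroring how the script reportedly
# works -- find the share's top-level folder on each disk and set the
# immutable bit there.
immute_share() {
  local share="$1"; shift
  local disk
  for disk in "$@"; do
    if [ -d "$disk/$share" ]; then
      echo "chattr -R +i $disk/$share"
    fi
  done
}

# Real usage on the array would be something like:
#   immute_share media /mnt/disk*
```

For a single unassigned-devices mount the loop is unnecessary; pointing chattr straight at /mnt/disks/btrfs_share/<folder> should behave the same way, since there is no user-share layer involved.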

  3. Nice work on the script; it's a great way of protecting files.

     

    I was wondering: if you set a share, not a disk, immutable using this script, how does that affect the files at the disk level?
    For instance, if I were to use unBalance to scatter/gather files across disks.

    I'm not exactly clear on how Unraid maps disks to shares. Hardlinks?

     

    Would the 'real' file on disk be immutable, or just the linked one in the share?

     

    (As a side note, I got pretty lucky on timing, as I only realised today that I had nothing set up for ransomware protection on my server, cheers!)

  4. I have some unassigned drives set up in a btrfs pool mounted as a share. It had been working perfectly until I applied the recent updates, at which point the drives will no longer auto-mount, or mount manually through the UI.

     

    This is the log error I get when attempting to mount with the plugin:

     

    Server kernel: BTRFS error (device sdj1): open_ctree failed
    Server unassigned.devices: Error: shell_exec(/sbin/mount -t btrfs -o auto,async,noatime,nodiratime '/dev/sdj1' '/mnt/disks/btrfspool' 2>&1) took longer than 10s!
    Server unassigned.devices: Mount of '/dev/sdj1' failed. Error message: command timed out
    Server unassigned.devices: Partition 'DISK' could not be mounted...


    Mounting works as normal when done through the terminal using these commands:

    mkdir /mnt/disks/btrfspool

    /sbin/mount -t btrfs -o auto,async,noatime,nodiratime '/dev/sdj1' '/mnt/disks/btrfspool'

     

    I assume this is due to the changes made around update "2019.11.29a", where the timeout was added?

    Is it possible to change the timeout, or to check for btrfs pools and extend the timeout so auto-mount works again?

     

    Is there a fix I can apply manually to get it working the same way as before until an update comes out?
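In the meantime, one possible stopgap is mounting the pool from a boot script instead of the plugin. An untested sketch (btrfspool is a placeholder label; mounting by LABEL= rather than /dev/sdj1 avoids trouble if device letters shift between boots, and the helper only prints the command):

```shell
# Sketch: build the manual mount command for the btrfs pool, suitable for
# a boot-time script. LABEL= is used instead of /dev/sdj1 because device
# letters can change between boots.
mount_pool_cmd() {
  local label="$1" target="$2"
  echo "mkdir -p $target && mount -t btrfs -o noatime,nodiratime LABEL=$label $target"
}

mount_pool_cmd btrfspool /mnt/disks/btrfspool
```

The printed command could then be dropped into whatever startup script the system runs, with `blkid` confirming the pool's actual label first.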

  5. 5 hours ago, Squid said:

    The plugin utilizes hdparm to check the status of the drives, which will be more accurate than what's reported by Unraid's GUI.

     

    You can always enable the additional debug logging and then post your diagnostics here. But don't leave it enabled, as it will basically spam the log with the output from the various commands as it runs.

     

    Ok, doing an unBalance op at the moment; when that's done in a day or so I'll do some testing. BTW, regarding my previous comment: I had spun up all drives, disabled the spin-down delay (it was 30 mins before), and manually enabled turbo write before enabling the plugin, so even if the GUI was reporting wrongly, the drives 'should' have been spun up.
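As a quick cross-check during that testing, hdparm's view of the spin state can be counted directly. A sketch (the device list is whatever your array actually uses):

```shell
# Sketch: count how many drives hdparm reports as spun up, to compare
# against what the plugin and the GUI each claim.
count_active() {
  local dev active=0
  for dev in "$@"; do
    # "hdparm -C <dev>" prints "drive state is:  active/idle" or "standby".
    if hdparm -C "$dev" 2>/dev/null | grep -q 'active/idle'; then
      active=$((active + 1))
    fi
  done
  echo "$active"
}

# Real usage would be something like:
#   count_active /dev/sd[a-o]
```

Note that some SAS HBAs answer power-state queries differently from plain SATA, which could be relevant to the miscounts described below.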

  6. Nice plugin; however, it always gets the number of spun-down disks wrong, even with polling under 10 seconds. E.g. with all disks spun up, an invoke setting of 2, and a poll of 5 seconds, it reported 6 disks spun down and disabled turbo, then 2 disks the next poll, then 8, then 1 (and enabled turbo), then disabled it again and reported 4, etc., all while every disk was spun up and active. (15-disk array, btw.)

     

    I wonder if it's something to do with the plugin not polling the SAS HBA properly?

     

    Anyway, I look forward to it being integrated into Unraid or fixed. Keep up the good work!

  7. 4 minutes ago, HellDiverUK said:

    2 weeks seems a very long time.  Last time I transferred 8TB it took about 2 days.  20TB is certainly doable in a week on gigabit.

    I meant 2 weeks for my current migration (now up to 16 days with a parity check added, sigh). I agree that a week should be fine for 20TB.

    • Upvote 1
  8. Just now, JoshFink said:


    Appreciate it. My family would kill me with that much downtime though with all their shows and such. I have time. I'd just like to do it once and make it as efficient as possible. 

     


    Thanks.. I'll check this out. I'm not familiar with Midnight Commander but I'll do some research. 

    MC is built into Unraid. Personally, though, for transferring that much data I'm using rsync; I know it adds a little overhead, but at least you can resume, and there's less risk of corrupted files.
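For the curious, the rsync invocation is nothing fancy. A sketch with placeholder paths (the helper just prints the command, so it's safe to dry-run):

```shell
# Sketch: a resumable bulk copy with rsync. -a preserves permissions and
# timestamps, --partial keeps interrupted files so a rerun can resume,
# and --progress shows per-file progress. Paths are placeholders.
rsync_cmd() {
  echo "rsync -avh --progress --partial $1/ $2/"
}

rsync_cmd /mnt/synology/media /mnt/user/media
```

The trailing slashes matter to rsync: with them, the *contents* of the source folder land inside the target rather than the folder itself.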

  9. Whatever way you do it, it's going to take a long time. My rough maths shows I've got over 2 weeks of transferring left (plus downtime for issues, like having to check parity for 20 hours after an unclean shutdown).

     

    If you are feeling brave, I'd suggest having a go at the Ubuntu mount and VM host share; it will be the fastest way to do it, and the least likely to risk messing up any of your Synology data.

  10. If you have enough free SATA/SAS connections, you could connect the Synology drives to Unraid and run XPEnology in a VM. Then create a second NIC using host-only networking, mount the Synology share in Unraid (or the other way round), and transfer via the host-only NIC. I've found that to be the fastest way so far (no network bottleneck). I'm new to Unraid and migrating 40TB from my XPEnology box to it, and short of any better alternatives this is what I have been doing. I would add that you need to make sure you use the appropriate boot loader for your current Synology version; if you are fully updated to 6.2 (or whatever the latest is), then this isn't an option.

     

    I also attempted mounting the RAID array in an Ubuntu docker to take advantage of the host share, but without PCIe passthrough (my current CPU has no VT-d) that didn't work. If you have VT-d, you could pass the controller through to an Ubuntu VM, mount the RAID array there, and transfer using the VM share; that would be the very fastest way to do it.

     

    Guide for mounting synology in ubuntu: https://www.synology.com/en-uk/knowledgebase/DSM/tutorial/Storage/How_can_I_recover_data_from_my_DiskStation_using_a_PC
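The rough sequence from that guide, for the record (untested here; package names are for Ubuntu, and the volume-group/logical-volume path varies per NAS, so /dev/vg1/lv0 is a placeholder). The helper prints the steps as a checklist rather than executing them, since it needs the actual disks attached:

```shell
# Sketch: the outline of Synology's recovery procedure -- assemble the
# md RAID, activate LVM, then mount read-only. Printed, not executed.
recovery_steps() {
  cat <<'EOF'
apt-get install -y mdadm lvm2
mdadm -Asf && vgchange -ay
mount -o ro /dev/vg1/lv0 /mnt/syno
EOF
}

recovery_steps
```

Mounting read-only (-o ro) keeps the source array untouched during the transfer, which is the point of doing it this way.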
