
Iormangund (Members · 34 posts)
Everything posted by Iormangund

  1. Thanks, good to know encrypted is untested; will tread carefully. It's a btrfs RAID 6 array of 4TB disks I use as a scratch disk and Steam/gaming library (awesome load times), nothing that isn't easily replaced, and I don't waste space backing it up. If it were anything important I sure as hell wouldn't use btrfs RAID 5/6 😆 It was more of a hypothetical really; nothing on there I need to be immutable. Thanks, good idea about setting immutable on external backups; must remember to do that next time I do a cold-storage backup.
  2. Ah OK, guess I'll have to wait to use it properly. I'm in the process of encrypting a 24x8TB disk array that is almost full, so everything is being scattered all over the place by unBalance as I empty one disk at a time. It's going to need some reorganising when that's all done, and then I can safely set my files immutable. I also have an Unassigned Devices btrfs array mounted at /mnt/disks/btrfs_share; can the script be used on a share outside the array (or be modified to do so), or would I just be better off learning about chattr and doing it manually?
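For the manual chattr route, something like this should do it. A minimal sketch: the demo file below is made up just so the commands can be tried safely; on the server you'd point them at /mnt/disks/btrfs_share instead.

```shell
# Demo on a scratch file; on the server, point this at the
# Unassigned Devices mount (/mnt/disks/btrfs_share) instead.
# Needs root, and a filesystem that supports the immutable flag.
f=/var/tmp/immutable-demo
touch "$f"
chattr +i "$f"               # set the immutable flag
lsattr "$f"                  # flag column shows an 'i' when set
rm -f "$f" 2>/dev/null || echo "delete blocked while immutable"
chattr -i "$f"               # clear the flag again
rm -f "$f"                   # now the delete goes through
```

`chattr -R +i /mnt/disks/btrfs_share` would do a whole share recursively; just remember to clear it with `chattr -R -i` before unbalance tries to move anything.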
  3. Nice work on the script; a great way of protecting files. I was wondering: if you set a share, rather than a disk, immutable using this script, how does that affect the file at the disk level? For instance, if I were to use unBalance to scatter/gather files across disks. I'm not exactly clear on how Unraid maps disks to shares. Hardlinks? Would the 'real' file on the disk be immutable, or just the linked one in the share? (As a side note, I got pretty lucky on timing, as I only realised today I had nothing set up for ransomware protection on my server, cheers!)
  4. I have some unassigned drives set up in a btrfs pool mounted as a share. It worked perfectly until I applied the recent updates, at which point the drives will no longer auto-mount, or mount manually through the UI. This is the log error I get when attempting to mount with the plugin:

     Server kernel: BTRFS error (device sdj1): open_ctree failed
     Server unassigned.devices: Error: shell_exec(/sbin/mount -t btrfs -o auto,async,noatime,nodiratime '/dev/sdj1' '/mnt/disks/btrfspool' 2>&1) took longer than 10s!
     Server unassigned.devices: Mount of '/dev/sdj1' failed. Error message: command timed out
     Server unassigned.devices: Partition 'DISK' could not be mounted...

     Mounting works as normal when done through the terminal using these commands:

     mkdir /mnt/disks/btrfspool
     /sbin/mount -t btrfs -o auto,async,noatime,nodiratime '/dev/sdj1' '/mnt/disks/btrfspool'

     I assume this is due to the changes made around update "2019.11.29a", where the timeout was added? Is it possible to change the timeout, or to check for btrfs pools and extend the timeout, so auto-mount works again? Is there a fix I can manually apply to get it working again the same way as before until an update comes out?
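For anyone hitting the same thing: a quick way to confirm it's the 10-second timeout rather than the mount itself failing is to time the manual mount. A sketch using the device and mountpoint from my logs (it skips itself if the device isn't present):

```shell
DEV=/dev/sdj1                 # device from the log above; adjust to yours
MNT=/mnt/disks/btrfspool
mkdir -p "$MNT"
if [ -b "$DEV" ]; then
  # If "real" comes out over 10s here, the plugin's shell_exec
  # timeout is the culprit rather than the filesystem itself.
  time mount -t btrfs -o auto,async,noatime,nodiratime "$DEV" "$MNT"
  findmnt "$MNT"              # confirm it actually mounted
else
  echo "device $DEV not present on this machine"
fi
```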
  5. OK, doing an unbalance op at the moment; when that's done in a day or so I'll do some testing. BTW, regarding my previous comment: I had spun up all drives, disabled the spin-down delay (it was 30 mins before), and manually enabled turbo write before enabling the plugin, so even if the GUI was reporting wrongly, the drives 'should' have been spun up.
  6. Nice plugin, however it always gets the number of spun-down disks wrong, even with polling under 10 seconds. E.g. with all disks spun up, an invoke setting of 2 and a poll of 5 seconds, it reported 6 disks spun down and disabled turbo, then 2 disks the next poll, then 8, then 1 (and enabled turbo), then disabled it and reported 4, and so on, all while every disk was spun up and active (15-disk array, btw). I wonder if it's something to do with the plugin not polling the SAS HBA properly? Anyway, I look forward to when it's integrated into Unraid or fixed. Keep up the good work.
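One way to cross-check the plugin's count is to ask the disks directly. A rough sketch: SATA drives answer `hdparm -C`; SAS drives behind an HBA often don't, and may need sdparm or sg_start instead, which is my suspicion about the polling issue.

```shell
# Print each disk's power state to compare with what the plugin
# reports: 'active/idle' = spun up, 'standby' = spun down.
for d in /dev/sd?; do
  [ -b "$d" ] || continue
  printf '%s: ' "$d"
  hdparm -C "$d" 2>/dev/null | grep -o 'active/idle\|standby' || echo 'no reply'
done
```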
  7. I meant 2 weeks for my current migration (now up to 16 days with a parity check added, sigh). Agreed that a week should be fine for 20TB.
  8. If you enable the rsync server on the Syno, then you can just run a command like this from the Synology box:

     rsync -havPW --stats /Sourcedir/ rsync://tower/Destinationdir/
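To unpack that one-liner, here's the same set of flags demonstrated on a local copy between two throwaway directories (paths are made up for the demo; the plain-copy fallback is just so it completes on a box without rsync installed):

```shell
# Flag breakdown:
#   -h  human-readable sizes        -a  archive mode (recurse, keep metadata)
#   -v  verbose                     -P  progress + keep partial files (resumable)
#   -W  whole-file transfer (skips the delta algorithm; faster on a LAN)
src=$(mktemp -d)
dst=/var/tmp/rsync-demo
echo hello > "$src/file.txt"
if command -v rsync >/dev/null; then
  rsync -havPW --stats "$src/" "$dst/"
else
  mkdir -p "$dst" && cp -a "$src/." "$dst/"   # fallback where rsync isn't installed
fi
cat "$dst/file.txt"
```

The `rsync://tower/...` form talks to the rsync daemon directly instead of going over SSH, which avoids encryption overhead on a trusted LAN.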
  9. MC is built into Unraid. Though personally, for transferring that much data, I'm using rsync; I know it adds a little overhead, but at least you can resume, and there is less risk of corrupted files.
  10. Whichever way you do it, it's going to take a long time. My rough maths shows I've got over 2 weeks of transferring left (plus downtime for issues, like having to check parity for 20 hours after an unclean shutdown). If you are feeling brave, I would suggest having a go at the Ubuntu mount and VM host share; it will be the fastest way to do it and the least likely to risk messing up any of your Synology data.
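The rough maths is just size over sustained rate. A quick sketch for a 40TB move; the two rates are assumptions, roughly 110 MB/s for a saturated gigabit link and 30 MB/s for parity-limited writes to the array:

```shell
# days = TB * 1e6 (MB) / rate (MB/s) / 86400 (seconds per day)
awk -v tb=40 'BEGIN {
  mb = tb * 1000 * 1000
  printf "at 110 MB/s (gigabit wire speed):   %.1f days\n", mb / 110 / 86400
  printf "at  30 MB/s (parity-limited write): %.1f days\n", mb / 30 / 86400
}'
```

That works out to roughly 4.2 days best case and 15.4 days parity-limited; the slower figure lines up with the "over 2 weeks" above.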
  11. If you have enough free SATA/SAS connections, you could connect the Synology drives to Unraid and run XPEnology in a VM. Then create a second NIC using host-only networking, mount the Synology share in Unraid (or the other way round), and transfer via the host-only NIC. I've found that to be the fastest way so far (no network bottleneck). I'm new to Unraid and migrating 40TB from my XPEnology box to it, and short of any better alternatives this is what I have been doing.

      I'd add that you need to make sure you use the appropriate bootloader for your current Synology version; if you are fully updated to 6.2 (or whatever the latest is), then this way isn't an option.

      I also attempted mounting the RAID array in an Ubuntu docker container to take advantage of a host share, but without PCIe passthrough (my current CPU has no VT-d) that didn't work. If you have VT-d, you could pass the controller to Ubuntu, mount the RAID array there, and transfer using a VM share; that would be the very fastest way to do it. Guide for mounting a Synology array in Ubuntu: https://www.synology.com/en-uk/knowledgebase/DSM/tutorial/Storage/How_can_I_recover_data_from_my_DiskStation_using_a_PC
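From memory, the linked guide boils down to something like the sketch below, run inside the Ubuntu VM with the NAS disks attached. vg1000/lv is the usual Synology volume name but not guaranteed, so verify with lvs first; the block bails out if no Synology-style data partitions are visible.

```shell
# Synology keeps data on the third partition of each member disk.
if ls /dev/sd?3 >/dev/null 2>&1; then
  apt-get install -y mdadm lvm2     # tools the guide has you install
  mdadm -Asf                        # assemble whatever arrays it finds
  vgchange -ay                      # activate the LVM volume group
  lvs                               # confirm the logical volume name
  mount -o ro /dev/vg1000/lv /mnt   # read-only is safest while migrating
else
  echo "no Synology-style data partitions visible; attach the disks first"
fi
```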