
JorgeB

Moderators
  • Posts: 63,929
  • Days Won: 675
Everything posted by JorgeB

  1. The read/write mount failed, so it was then mounted read-only:

     Aug 15 16:56:46 HomeServer unassigned.devices: Mount drive command: /sbin/mount -t ntfs -o auto,async,noatime,nodiratime,nodev,nosuid,umask=000 '/dev/sde2' '/mnt/disks/Kevin'
     Aug 15 16:56:46 HomeServer unassigned.devices: Mount failed with error: ntfs-3g-mount: mount failed: Device or resource busy
     Aug 15 16:56:46 HomeServer unassigned.devices: Mounting ntfs drive read only.
     Aug 15 16:56:46 HomeServer unassigned.devices: Mount drive ro command: /sbin/mount -t ntfs -ro auto,async,noatime,nodiratime,nodev,nosuid,umask=000 '/dev/sde2' '/mnt/disks/Kevin'
  2. It's normal for the health report to fail during a rebuild/sync; it will turn healthy once it finishes.
  3. You should be able to write to any unassigned NTFS device; it would be better to post in the UD thread, and make sure you include your diagnostics.
  4. It can still cause issues, since any data in RAM will not be snapshotted, though I did try several times to boot from a live VM snapshot and it always worked, the same way Windows will most times recover from a crash/power loss. But there could be issues, so what I do is take a live snapshot of all VMs every day and, at least once a week, try to turn them all off and do an offline snapshot; this way I have more options if I need to recover. There's a sketch of the live snapshot script below.
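
     One way to do such a live snapshot with btrfs, as a minimal sketch only: it assumes the vdisks live in a domains subvolume on a btrfs cache pool, and the paths are just examples, so adjust them to your setup:

     #!/bin/bash
     # sketch only: snapshot the domains subvolume while the VMs are running
     # assumes /mnt/cache/domains is a btrfs subvolume; adjust paths as needed
     nd=$(date +%Y-%m-%d-%H%M)
     mkdir -p /mnt/cache/snaps
     btrfs sub snap -r /mnt/cache/domains /mnt/cache/snaps/domains_$nd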
  5. These can also be useful while there's no GUI support to list and delete older snapshots, also with the user scripts plugin. List existing snapshots for a share (I use just one disk, since all the other disks containing that share will have the same ones):

     #!/bin/bash
     btrfs sub list /mnt/disk1

     Delete snapshots:

     #!/bin/bash
     #argumentDescription=Enter in the arguments
     #argumentDefault=Date
     for i in {1..28} ; do
     btrfs sub del /mnt/disk$i/snaps/TV_$1
     done

     For example, the list script will produce this:

     Script location: /tmp/user.scripts/tmpScripts/list sspaces snapshots/script
     Note that closing this window will abort the execution of this script
     ID 613 gen 16585 top level 5 path sspaces
     ID 614 gen 16580 top level 5 path snaps
     ID 3381 gen 4193 top level 614 path snaps/sspaces_daily_2018-07-01-070001
     ID 3382 gen 4200 top level 614 path snaps/sspaces_daily_2018-07-02-070001
     ID 3386 gen 4204 top level 614 path snaps/sspaces_daily_2018-07-03-070001
     ID 3387 gen 4206 top level 614 path snaps/sspaces_daily_2018-07-04-070001
     ID 3391 gen 4213 top level 614 path snaps/sspaces_daily_2018-07-05-070002
     ID 3394 gen 4219 top level 614 path snaps/sspaces_daily_2018-07-06-070001
     ID 3419 gen 4231 top level 614 path snaps/sspaces_daily_2018-07-07-070001
     ID 3518 gen 4260 top level 614 path snaps/sspaces_daily_2018-07-08-070001
     ID 3522 gen 4263 top level 614 path snaps/sspaces_daily_2018-07-09-070001
     ID 3541 gen 4268 top level 614 path snaps/sspaces_daily_2018-07-10-070001
     ID 3545 gen 4274 top level 614 path snaps/sspaces_daily_2018-07-11-070001
     ID 3554 gen 4283 top level 614 path snaps/sspaces_daily_2018-07-12-070001
     ID 3634 gen 4304 top level 614 path snaps/sspaces_daily_2018-07-13-070001
     ID 3638 gen 4307 top level 614 path snaps/sspaces_daily_2018-07-14-070001
     ID 3645 gen 4312 top level 614 path snaps/sspaces_daily_2018-07-15-070001
     ID 3676 gen 4320 top level 614 path snaps/sspaces_daily_2018-07-16-070001
     ID 3695 gen 4326 top level 614 path snaps/sspaces_daily_2018-07-17-070001
     ID 3757 gen 4339 top level 614 path snaps/sspaces_daily_2018-07-18-070001
     ID 3779 gen 4348 top level 614 path snaps/sspaces_daily_2018-07-19-070001
     ID 3780 gen 4351 top level 614 path snaps/sspaces_daily_2018-07-20-070001
     ID 3781 gen 4359 top level 614 path snaps/sspaces_daily_2018-07-21-070001
     ID 3782 gen 4391 top level 614 path snaps/sspaces_daily_2018-07-22-070001
     ID 3783 gen 4392 top level 614 path snaps/sspaces_daily_2018-07-23-070001
     ID 3784 gen 4398 top level 614 path snaps/sspaces_daily_2018-07-24-070001
     ID 3785 gen 4402 top level 614 path snaps/sspaces_daily_2018-07-25-070001
     ID 3786 gen 4410 top level 614 path snaps/sspaces_daily_2018-07-26-070001
     ID 3914 gen 4473 top level 614 path snaps/sspaces_daily_2018-07-27-070001
     ID 4021 gen 9626 top level 614 path snaps/sspaces_daily_2018-07-28-070002
     ID 4047 gen 16427 top level 614 path snaps/sspaces_daily_2018-07-29-070001
     ID 4048 gen 16429 top level 614 path snaps/sspaces_daily_2018-07-30-070001
     ID 4049 gen 16431 top level 614 path snaps/sspaces_daily_2018-07-31-070001
     ID 4050 gen 16437 top level 614 path snaps/sspaces_daily_2018-08-01-070001
     ID 4051 gen 16445 top level 614 path snaps/sspaces_daily_2018-08-02-070001
     ID 4052 gen 16453 top level 614 path snaps/sspaces_daily_2018-08-03-070001
     ID 4053 gen 16461 top level 614 path snaps/sspaces_daily_2018-08-04-070001
     ID 4054 gen 16477 top level 614 path snaps/sspaces_daily_2018-08-05-070001
     ID 4055 gen 16505 top level 614 path snaps/sspaces_daily_2018-08-06-070001
     ID 4056 gen 16508 top level 614 path snaps/sspaces_daily_2018-08-07-070001
     ID 4057 gen 16515 top level 614 path snaps/sspaces_daily_2018-08-08-070001
     ID 4058 gen 16522 top level 614 path snaps/sspaces_daily_2018-08-09-070001
     ID 4059 gen 16525 top level 614 path snaps/sspaces_daily_2018-08-10-070001
     ID 4060 gen 16564 top level 614 path snaps/sspaces_daily_2018-08-11-070001
     ID 4061 gen 16579 top level 614 path snaps/sspaces_daily_2018-08-12-070002
     ID 4062 gen 16580 top level 614 path snaps/sspaces_daily_2018-08-13-070001

     Then I can input just one date or use a wildcard to delete several snapshots at once, for example enter 2018-07-0* or 2018-07* to delete the oldest 10 or all of last month's snapshots. A variant of the delete script that works by age is sketched below.
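
     That variant, deleting by age instead of by a typed date, as a rough sketch; it assumes the same snaps/TV_<date> naming as above, and the disk range and retention are examples to adjust:

     #!/bin/bash
     # sketch: delete TV snapshots older than 28 days on disks 1..28 (adjust both)
     cutoff=$(date -d "28 days ago" +%Y-%m-%d)
     for i in {1..28} ; do
       for s in /mnt/disk$i/snaps/TV_* ; do
         [[ -e "$s" ]] || continue       # skip disks with no snapshots
         d=${s##*TV_}                    # e.g. 2018-08-13-070001
         d=${d:0:10}                     # keep just the YYYY-MM-DD part
         [[ "$d" < "$cutoff" ]] && btrfs sub del "$s"
       done
     done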
  6. Yes, if you have existing data on the disk(s) it can take a while to move the data to the subvolume, since it's like a disk-to-disk copy, it can't be moved directly. But there's a way around that; this is what I did to quickly convert a share to a subvolume:

     Rename the current share to a temp name:

     mv /mnt/disk1/YourShare /mnt/disk1/temp

     Create a new subvolume with the old share name:

     btrfs sub create /mnt/disk1/YourShare

     Use btrfs COW to do an instant (or almost instant, it can take a few seconds) copy of the data to the new subvolume:

     cp -aT --reflink=always /mnt/disk1/temp /mnt/disk1/YourShare

     Delete the temp folder:

     rm -r /mnt/disk1/temp

     Done. Repeat this for all disks and shares (see the sketch below for a loop that does it in one go). You should also create a folder (or more if there are various shares on each disk) for the snapshots; this can be a regular folder, e.g.:

     mkdir /mnt/disk1/snaps

     Then I use the user scripts plugin to create the snapshots, at regular intervals for my always-on server, and at first array start for cold storage/backup servers. I use a script like this:

     #!/bin/bash
     nd=$(date +%Y-%m-%d-%H%M)
     for i in {1..28} ; do
     btrfs sub snap -r /mnt/disk$i/TV /mnt/disk$i/snaps/TV_$nd
     done
     beep -f 500 ; beep -f 500 ; beep -f 500

     On line 3 specify the correct number of disks where the share is, e.g., for disks 1 to 5 it would be "for i in {1..5} ; do", and adjust the paths as necessary. The beeps I use on my backup servers so I know when the snapshots are done and the server is ready to receive new data.
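
     The loop mentioned above, to do the conversion for several disks and shares in one go, could look something like this; it's a rough sketch, the share names and disk range are examples, and it assumes none of the shares is already a subvolume:

     #!/bin/bash
     # rough sketch: convert existing share folders to btrfs subvolumes on disks 1..5
     for i in {1..5} ; do
       for share in TV Movies ; do                        # example share names
         [[ -d /mnt/disk$i/$share ]] || continue          # share not on this disk
         mv /mnt/disk$i/$share /mnt/disk$i/${share}_temp
         btrfs sub create /mnt/disk$i/$share
         cp -aT --reflink=always /mnt/disk$i/${share}_temp /mnt/disk$i/$share
         rm -r /mnt/disk$i/${share}_temp
       done
       mkdir -p /mnt/disk$i/snaps                         # folder for the snapshots
     done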
  7. Yes it is, and even more irritating is also sending a PM. Please don't do that again @ThePhotraveller; starting your own thread, and that alone, is what you should do.
  8. Yes, and only if there's parity. Your current array config isn't quite clear from the OP: is the 4TB disk parity?
  9. Yes, as long as the minimum free space is correctly set for that share and for the cache, but note that a pool with 2 different size devices reports the wrong free space. Also, there's no point in using the cache pool for the initial data migration; you can and should enable turbo write.
  10. You can; the main disadvantage is that array devices can't be trimmed, so write performance might decrease over time.
  11. Not AFAIK, I searched for a way to do it a while back and found none.
  12. That could help, as I've used some of these with Windows and this issue never manifested itself, so it might be limited to Linux and FreeBSD, as the same happens when they are used with FreeNAS.
  13. Not yet thankfully, but a heat wave is expected for next week.
  14. AFAIK that used to be the drive used, but it has since been changed to the ST8000DM004, which appears to perform worse and also has a very low workload rating, not enough for a monthly parity check. I'd be tempted to buy one if it came with the Archive drive, but it won't interest me if it's the newer model.
  15. Did you open it already? Is the disk an ST8000DM004?
  16. @gfjardim when possible please fix the PHP warnings generated by the plugin. I know they don't cause a problem per se, but it's a pain to find them flooding various syslogs every day, making it much more time consuming to go through them, thanks.

      Jul 25 21:41:08 Unraid-Plex rc.diskinfo[8419]: PHP Warning: Use of undefined constant byte12h - assumed 'byte12h' (this will throw an Error in a future version of PHP) in /etc/rc.d/rc.diskinfo on line 663
      Jul 25 21:41:08 Unraid-Plex rc.diskinfo[8419]: PHP Warning: Use of undefined constant ID_MODEL - assumed 'ID_MODEL' (this will throw an Error in a future version of PHP) in /etc/rc.d/rc.diskinfo on line 470
      Jul 25 21:41:08 Unraid-Plex rc.diskinfo[8419]: PHP Warning: Use of undefined constant SERIAL_SHORT - assumed 'SERIAL_SHORT' (this will throw an Error in a future version of PHP) in /etc/rc.d/rc.diskinfo on line 470
      Jul 25 21:41:08 Unraid-Plex rc.diskinfo[8419]: PHP Warning: Use of undefined constant byte11h - assumed 'byte11h' (this will throw an Error in a future version of PHP) in /etc/rc.d/rc.diskinfo on line 662
      Jul 25 21:41:08 Unraid-Plex rc.diskinfo[8419]: PHP Warning: Use of undefined constant byte10h - assumed 'byte10h' (this will throw an Error in a future version of PHP) in /etc/rc.d/rc.diskinfo on line 662
      Jul 25 21:41:08 Unraid-Plex rc.diskinfo[8419]: PHP Warning: Use of undefined constant byte9h - assumed 'byte9h' (this will throw an Error in a future version of PHP) in /etc/rc.d/rc.diskinfo on line 662
      Jul 25 21:41:08 Unraid-Plex rc.diskinfo[8419]: PHP Warning: Use of undefined constant byte8h - assumed 'byte8h' (this will throw an Error in a future version of PHP) in /etc/rc.d/rc.diskinfo on line 662
      Jul 25 21:41:08 Unraid-Plex rc.diskinfo[8419]: PHP Warning: Use of undefined constant byte15h - assumed 'byte15h' (this will throw an Error in a future version of PHP) in /etc/rc.d/rc.diskinfo on line 663
      Jul 25 21:41:08 Unraid-Plex rc.diskinfo[8419]: PHP Warning: Use of undefined constant byte14h - assumed 'byte14h' (this will throw an Error in a future version of PHP) in /etc/rc.d/rc.diskinfo on line 663
      Jul 25 21:41:08 Unraid-Plex rc.diskinfo[8419]: PHP Warning: Use of undefined constant byte13h - assumed 'byte13h' (this will throw an Error in a future version of PHP) in /etc/rc.d/rc.diskinfo on line 663
      Jul 25 21:41:08 Unraid-Plex rc.diskinfo[8419]: PHP Warning: Use of undefined constant byte12h - assumed 'byte12h' (this will throw an Error in a future version of PHP) in /etc/rc.d/rc.diskinfo on line 663
  17. I also saw that but would expect it to be difficult to compile for unRAID. If you're using compression on the whole disk you can check the global compression ratio with this, e.g.:

      Total (uncompressed) file size:

      root@Tower9:~# du -sh /mnt/disk4
      1.1G    /mnt/disk4

      Actual used space:

      root@Tower9:~# df -h /mnt/disk4
      Filesystem      Size  Used Avail Use% Mounted on
      /dev/md4        466G   83M  465G   1% /mnt/disk4

      To get the compression ratio divide the two outputs; in this case it's around 13:1. A small script that does the division is sketched below.
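
      A rough sketch only: the path is an example, it assumes the disk holds only the compressed data, and integer division is fine for a ballpark number:

      #!/bin/bash
      # sketch: approximate compression ratio for a whole disk
      apparent=$(du -s --block-size=1M /mnt/disk4 | cut -f1)          # logical size in MiB
      used=$(df --block-size=1M --output=used /mnt/disk4 | tail -1)   # allocated space in MiB
      echo "approx. compression ratio: $(( apparent / used )):1"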
  18. Yes, I used those since I knew they would compress a lot.
  19. I did some testing at work, and since there's not an easy way to check the compressed size, I used two empty disks on my work server, both with the same 512x2MB text files; these, as expected, are very compressible and you can see the difference: 1.09GB vs 86.3MB used.
  20. It is valid, parity is updated during the format.
  21. No script needed, you just do it one time and it stays on, but it will only start compressing new files added to that share after +c is set. If at some point in the future you want to turn it off use chattr -c, though similarly it will only affect new/changed files. A quick example follows below.
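
      For example (the path is just an example), setting it on a share's top folder so new files written there get compressed, checking it, and turning it back off:

      chattr +c /mnt/disk1/MyShare        # new files added here will be compressed
      lsattr -d /mnt/disk1/MyShare        # the 'c' flag should show in the attribute list
      chattr -c /mnt/disk1/MyShare        # turn it off later; existing files are left as they are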