Everything posted by JorgeB

  1. Looks like all the Marvell controller ports stopped working with that disk connected. Marvell controllers are not recommended for unRAID, but since it was working OK with just the other disk, you could try swapping one of the smaller disks connected to the Intel controller and connecting the new disk to an Intel port.
  2. Most likely unRAID only checks the number of cache devices, and if it's >1 it considers it a protected pool, since raid1 is the default pool mode; it doesn't actually check the btrfs profile in use. You can confirm the profile that's really in use yourself, see the sketch below.
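     A minimal sketch, not from the original reply, to check which btrfs profiles the pool is actually using, assuming it's mounted at /mnt/cache (adjust the path for your system):
     # show the data/metadata profiles in use (e.g. single, raid1)
     btrfs filesystem df /mnt/cache
     # more detail, including per-device allocation
     btrfs filesystem usage /mnt/cache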
  3. Difficult to say, most likely related to the kernel oops that happened, no idea if that was hardware or software though.
     Aug 17 03:23:34 Tower kernel: BUG: unable to handle kernel paging request at 00000000024b1dd3
     Aug 17 03:23:34 Tower kernel: IP: prefetch_freepointer.isra.11+0x8/0x10
     Aug 17 03:23:34 Tower kernel: PGD 8000000458112067 P4D 8000000458112067 PUD 45811e067 PMD 0
     Aug 17 03:23:34 Tower kernel: Oops: 0000 [#1] SMP PTI
  4. But there are other issues, since a console reboot should work even if a btrfs operation was ongoing; you'll likely need to force it.
  5. Post the output of: btrfs balance status /mnt/cache
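     A minimal sketch, not from the original reply, for checking and cancelling a stuck balance from the console before forcing a reboot, assuming the pool is mounted at /mnt/cache:
     # show whether a balance is currently running on the pool
     btrfs balance status /mnt/cache
     # if one is running, ask btrfs to cancel it (it stops at the next safe point)
     btrfs balance cancel /mnt/cache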
  6. Looks like something else was using the disk, but I don't have experience with using ntfs disks with unRAID. Try again after rebooting and if the same happens post on the UD thread.
  7. There was a crash, but it's strange that it would leave a btrfs operation running, especially since you don't even have a pool; try typing reboot on the console.
  8. Read/write mount failed, so it was then mounted read only:
     Aug 15 16:56:46 HomeServer unassigned.devices: Mount drive command: /sbin/mount -t ntfs -o auto,async,noatime,nodiratime,nodev,nosuid,umask=000 '/dev/sde2' '/mnt/disks/Kevin'
     Aug 15 16:56:46 HomeServer unassigned.devices: Mount failed with error: ntfs-3g-mount: mount failed: Device or resource busy
     Aug 15 16:56:46 HomeServer unassigned.devices: Mounting ntfs drive read only.
     Aug 15 16:56:46 HomeServer unassigned.devices: Mount drive ro command: /sbin/mount -t ntfs -ro auto,async,noatime,nodiratime,nodev,nosuid,umask=000 '/dev/sde2' '/mnt/disks/Kevin'
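     A minimal sketch, not from the original reply, to see what is keeping the partition busy before retrying the mount, assuming the device is still /dev/sde2 (adjust for your system):
     # check if the partition is already mounted somewhere
     grep sde2 /proc/mounts
     # list any processes still using the device
     fuser -vm /dev/sde2
     lsof /dev/sde2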
  9. It's normal for the health report to fail during a rebuild/sync, it will turn healthy once it finishes.
  10. You should be able to write to any unassigned ntfs device, it would be better to post on the UD thread and make sure you post your diagnostics.
  11. It can still cause issues, since any data in RAM will not be snapshotted, though I did try several times to boot off a live VM snapshot and it always worked, the same way Windows will most times recover from a crash/power loss. But there could be issues, so what I do is a live snapshot every day of all VMs, and at least once a week I try to turn them all off and do an offline snapshot; this way I have more options if I need to recover. A sketch of the offline approach is below.
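     A sketch of an offline snapshot, not from the original reply, assuming the vdisks live in a btrfs subvolume at /mnt/disk1/domains, a snaps folder already exists, and VM names have no spaces (paths and the shutdown wait are assumptions, adjust for your setup):
     #!/bin/bash
     # stop all running VMs, snapshot the vdisk subvolume, then start them again
     running=$(virsh list --name --state-running)
     for vm in $running ; do
     virsh shutdown "$vm"
     done
     # crude wait for clean shutdown, a real script should poll virsh list instead
     sleep 120
     nd=$(date +%Y-%m-%d-%H%M)
     btrfs sub snap -r /mnt/disk1/domains /mnt/disk1/snaps/domains_$nd
     for vm in $running ; do
     virsh start "$vm"
     done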
  12. These can also be useful while there's no GUI support to list and delete older snapshots, also run with the user scripts plugin.
     List existing snapshots for a share, I use just one disk, since all the other disks containing that share will have the same ones:
     #!/bin/bash
     btrfs sub list /mnt/disk1
     Delete snapshots:
     #!/bin/bash
     #argumentDescription=Enter in the arguments
     #argumentDefault=Date
     for i in {1..28} ; do
     btrfs sub del /mnt/disk$i/snaps/TV_$1
     done
     For example the list script will produce this:
     Script location: /tmp/user.scripts/tmpScripts/list sspaces snapshots/script
     Note that closing this window will abort the execution of this script
     ID 613 gen 16585 top level 5 path sspaces
     ID 614 gen 16580 top level 5 path snaps
     ID 3381 gen 4193 top level 614 path snaps/sspaces_daily_2018-07-01-070001
     ID 3382 gen 4200 top level 614 path snaps/sspaces_daily_2018-07-02-070001
     ID 3386 gen 4204 top level 614 path snaps/sspaces_daily_2018-07-03-070001
     ID 3387 gen 4206 top level 614 path snaps/sspaces_daily_2018-07-04-070001
     ID 3391 gen 4213 top level 614 path snaps/sspaces_daily_2018-07-05-070002
     ID 3394 gen 4219 top level 614 path snaps/sspaces_daily_2018-07-06-070001
     ID 3419 gen 4231 top level 614 path snaps/sspaces_daily_2018-07-07-070001
     ID 3518 gen 4260 top level 614 path snaps/sspaces_daily_2018-07-08-070001
     ID 3522 gen 4263 top level 614 path snaps/sspaces_daily_2018-07-09-070001
     ID 3541 gen 4268 top level 614 path snaps/sspaces_daily_2018-07-10-070001
     ID 3545 gen 4274 top level 614 path snaps/sspaces_daily_2018-07-11-070001
     ID 3554 gen 4283 top level 614 path snaps/sspaces_daily_2018-07-12-070001
     ID 3634 gen 4304 top level 614 path snaps/sspaces_daily_2018-07-13-070001
     ID 3638 gen 4307 top level 614 path snaps/sspaces_daily_2018-07-14-070001
     ID 3645 gen 4312 top level 614 path snaps/sspaces_daily_2018-07-15-070001
     ID 3676 gen 4320 top level 614 path snaps/sspaces_daily_2018-07-16-070001
     ID 3695 gen 4326 top level 614 path snaps/sspaces_daily_2018-07-17-070001
     ID 3757 gen 4339 top level 614 path snaps/sspaces_daily_2018-07-18-070001
     ID 3779 gen 4348 top level 614 path snaps/sspaces_daily_2018-07-19-070001
     ID 3780 gen 4351 top level 614 path snaps/sspaces_daily_2018-07-20-070001
     ID 3781 gen 4359 top level 614 path snaps/sspaces_daily_2018-07-21-070001
     ID 3782 gen 4391 top level 614 path snaps/sspaces_daily_2018-07-22-070001
     ID 3783 gen 4392 top level 614 path snaps/sspaces_daily_2018-07-23-070001
     ID 3784 gen 4398 top level 614 path snaps/sspaces_daily_2018-07-24-070001
     ID 3785 gen 4402 top level 614 path snaps/sspaces_daily_2018-07-25-070001
     ID 3786 gen 4410 top level 614 path snaps/sspaces_daily_2018-07-26-070001
     ID 3914 gen 4473 top level 614 path snaps/sspaces_daily_2018-07-27-070001
     ID 4021 gen 9626 top level 614 path snaps/sspaces_daily_2018-07-28-070002
     ID 4047 gen 16427 top level 614 path snaps/sspaces_daily_2018-07-29-070001
     ID 4048 gen 16429 top level 614 path snaps/sspaces_daily_2018-07-30-070001
     ID 4049 gen 16431 top level 614 path snaps/sspaces_daily_2018-07-31-070001
     ID 4050 gen 16437 top level 614 path snaps/sspaces_daily_2018-08-01-070001
     ID 4051 gen 16445 top level 614 path snaps/sspaces_daily_2018-08-02-070001
     ID 4052 gen 16453 top level 614 path snaps/sspaces_daily_2018-08-03-070001
     ID 4053 gen 16461 top level 614 path snaps/sspaces_daily_2018-08-04-070001
     ID 4054 gen 16477 top level 614 path snaps/sspaces_daily_2018-08-05-070001
     ID 4055 gen 16505 top level 614 path snaps/sspaces_daily_2018-08-06-070001
     ID 4056 gen 16508 top level 614 path snaps/sspaces_daily_2018-08-07-070001
     ID 4057 gen 16515 top level 614 path snaps/sspaces_daily_2018-08-08-070001
     ID 4058 gen 16522 top level 614 path snaps/sspaces_daily_2018-08-09-070001
     ID 4059 gen 16525 top level 614 path snaps/sspaces_daily_2018-08-10-070001
     ID 4060 gen 16564 top level 614 path snaps/sspaces_daily_2018-08-11-070001
     ID 4061 gen 16579 top level 614 path snaps/sspaces_daily_2018-08-12-070002
     ID 4062 gen 16580 top level 614 path snaps/sspaces_daily_2018-08-13-070001
     Then I can input just one date or use a wildcard to delete several snapshots at once, for example insert 2018-07-0* or 2018-07* to delete the oldest 10 or all of last month's snapshots. A retention sketch that automates this is below.
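     As a possible extension, not from the original post, a retention sketch that deletes snapshots older than a chosen number of days, assuming the same TV_YYYY-MM-DD-HHMM naming used by the scripts above (the 30-day cutoff and disk count are assumptions):
     #!/bin/bash
     # delete snapshots older than KEEP_DAYS, assuming names like TV_2018-07-01-0700
     KEEP_DAYS=30
     cutoff=$(date -d "-$KEEP_DAYS days" +%Y-%m-%d)
     for i in {1..28} ; do
     for snap in /mnt/disk$i/snaps/TV_* ; do
     [ -d "$snap" ] || continue
     # extract the YYYY-MM-DD part of the snapshot name
     snapdate=$(basename "$snap" | cut -d_ -f2 | cut -c1-10)
     # plain string comparison works because the dates are zero-padded ISO format
     if [[ "$snapdate" < "$cutoff" ]]; then
     btrfs sub del "$snap"
     fi
     done
     done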
  13. Yes, if you have existing data on the disk(s) it can take a while to move the data to the subvolume, since it's like a disk to disk copy, it can't be moved directly, but there's a way around that. This is what I did to quickly convert a share to a subvolume:
     Rename the current share to a temp name:
     mv /mnt/disk1/YourShare /mnt/disk1/temp
     Create a new subvolume with the old share name:
     btrfs sub create /mnt/disk1/YourShare
     Use btrfs COW to do an instant (or almost instant, it can take a few seconds) copy of the data to the new subvolume:
     cp -aT --reflink=always /mnt/disk1/temp /mnt/disk1/YourShare
     Delete the temp folder:
     rm -r /mnt/disk1/temp
     Done, repeat this for all disks and shares.
     You should also create a folder (or more if there are various shares on each disk) for the snapshots, this can be a regular folder, e.g.:
     mkdir /mnt/disk1/snaps
     Then I use the user scripts plugin to create the snapshots, at regular intervals for my always on server, and at first array start for cold storage/backup servers. I use a script like this:
     #!/bin/bash
     nd=$(date +%Y-%m-%d-%H%M)
     for i in {1..28} ; do
     btrfs sub snap -r /mnt/disk$i/TV /mnt/disk$i/snaps/TV_$nd
     done
     beep -f 500 ; beep -f 500 ; beep -f 500
     On line 3 specify the correct number of disks where the share is, e.g., for disks 1 to 5 it would be for i in {1..5} ; do and adjust the paths as necessary. The beeps I use on my backup servers so I know when the snapshots are done and the server is ready to receive new data. A restore sketch using the same reflink trick is below.
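     Not from the original post, but as an illustration of how a snapshot can be used later: a sketch that restores a share from a read-only snapshot with the same reflink trick, assuming a hypothetical snapshot named TV_2018-08-13-0700 on disk1:
     #!/bin/bash
     # move the damaged share out of the way (delete it once the restore is verified)
     mv /mnt/disk1/TV /mnt/disk1/TV_broken
     # recreate the share as a writable subvolume
     btrfs sub create /mnt/disk1/TV
     # reflink-copy the snapshot contents back, nearly instant on btrfs
     cp -aT --reflink=always /mnt/disk1/snaps/TV_2018-08-13-0700 /mnt/disk1/TV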
  14. Yes it is, and even more irritating is also sending a PM, please don't do that again @ThePhotraveller. Starting your own thread, and that alone, is what you should do.
  15. Yes, and only if there's parity; your current array config isn't quite clear from the OP, is the 4TB disk parity?
  16. Yes, as long as the minimum free space is correctly set for that share and for the cache, but note that a pool with 2 different size devices reports the wrong free space. Also, there's no point in using the cache pool for the initial data migration, you can and should enable turbo write.
  17. You can, main disadvantage is that array devices can't be trimmed, so write performance might decrease over time.
  18. Not AFAIK, I searched for a way to do it a while back and found none.
  19. That could help, as I've used some of these with Windows and this issue never manifested itself, so it might be limited to Linux and FreeBSD, as the same happens when they are used with FreeNAS.
  20. Not yet thankfully, but a heat wave is expected for next week.
  21. AFAIK that used to be the drive used, but it has since been changed to the ST8000DM004, which appears to perform worse and also has a very low workload rating, not enough for a monthly parity check. I'd be tempted to buy one with the Archive drive, but it won't interest me if it's the newer model.