Everything posted by JorgeB

  1. You can't check the filesystem on a disk after it was wiped. Try rebooting to see if UD picks up the new disk state, though it shouldn't be necessary.
  2. Try using wipefs, first:

     wipefs -a /dev/sdX1

     then:

     wipefs -a /dev/sdX

     Replace X with the correct identifier, and triple check you're using the correct one before running the commands.
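     Not part of the original reply, but a common way to triple check which device you're about to wipe is to list the drives with their size, model and serial:

     # list block devices with size, model and serial to confirm the target
     lsblk -o NAME,SIZE,MODEL,SERIAL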
  3. With 15 array devices you'll be fine: connect 8 to the top HBA and 7 to the bottom; the cache devices can be on the onboard ports since they won't be (or shouldn't be) in heavy use during parity checks/disk rebuilds.
  4. Just be aware that the x4 slot goes through the south bridge, i.e., it shares the A-link connection (2000MB/s) with the onboard SATA ports. If you're using all 8 ports on the 2nd HBA and also all of the onboard ports, that bandwidth will be shared by all 14 devices, so if you're not using every port, assign as few array devices as possible to the 2nd HBA + onboard controller; see the rough numbers below.
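     As a rough back-of-the-envelope estimate (my arithmetic, not from the original reply): 2000 MB/s shared across 14 drives works out to about 2000 / 14 ≈ 143 MB/s per drive, which is below the outer-track sequential speed of many modern disks, so with every port in use the A-link would be the bottleneck during a parity check or rebuild.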
  5. Yes, for all practical purposes it's a separate filesystem, so although you're doing a move it acts like a copy on the same disk.

     I know little about scripts; I can Google and take this or that that works for me, but the script I use is very crude. It works for me because I know its limitations, e.g., it assumes you already have two existing snapshots on the first run, or it won't work correctly when deleting the older snapshot. Still, I don't mind posting it, so it might give some ideas (and a good laugh) to the forum members who do know how to write scripts. I use the User Scripts plugin with two very similar scripts: one runs on a daily schedule and takes a snapshot with the VMs running, and a very similar one I run manually, usually once a week, to take a snapshot with all the VMs shut down. Ideally, if there's a problem I would restore to an offline snapshot, but the online snapshots give more options in case I need something more recent.

     #description=
     #backgroundOnly=true

     cd /mnt/cache

     # oldest and most recent existing offline snapshots (assumes two already exist)
     sd=$(echo VMs_Off* | awk '{print $1}')
     ps=$(echo VMs_Off* | awk '{print $2}')

     if [ "$ps" == "VMs_Offline_$(date '+%Y%m%d')" ]
     then
         echo "There's already a snapshot from today"
     else
         # take a new read-only snapshot of the VMs subvolume
         btrfs sub snap -r /mnt/cache/VMs /mnt/cache/VMs_Offline_$(date '+%Y%m%d')
         sync
         # incremental send to the backup disk, using the previous snapshot as parent
         btrfs send -p /mnt/cache/$ps /mnt/cache/VMs_Offline_$(date '+%Y%m%d') | btrfs receive /mnt/disks/backup
         if [[ $? -eq 0 ]]; then
             /usr/local/emhttp/webGui/scripts/notify -i normal -s "Send/Receive complete"
             # delete the oldest snapshot from both the cache and the backup destination
             btrfs sub del /mnt/cache/$sd
             btrfs sub del /mnt/disks/backup/$sd
         else
             /usr/local/emhttp/webGui/scripts/notify -i warning -s "Send/Receive failed"
         fi
     fi
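     Restoring isn't covered by the script above; a minimal sketch of how it could be done with these paths (my assumption, not quoted from the original posts) is to set the current subvolume aside and take a writable snapshot of the chosen read-only one:

     # with the VMs stopped, set the damaged subvolume aside (or delete it)
     mv /mnt/cache/VMs /mnt/cache/VMs_old
     # create a writable snapshot of the chosen offline snapshot in its place
     btrfs sub snap /mnt/cache/VMs_Offline_YYYYMMDD /mnt/cache/VMs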
  6. Those values should be OK, but you need to test, and probably don't go above that. At least I found that having a 50% or higher dirty ratio was great while the RAM lasted, but slower than normal for larger transfers: when the kernel flushed all that data to the disks while still writing the new data, the result was noticeably slower writes. It's also good to be using a UPS; the more dirty memory you have, the more data you'll lose if there's a power cut during writes.
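     For reference, these are the kernel's dirty write-cache thresholds; the exact values being discussed aren't quoted here, so the ones below are purely illustrative:

     # illustrative values only, tune and test for your own workload
     sysctl -w vm.dirty_background_ratio=5   # start background flushing at 5% of RAM
     sysctl -w vm.dirty_ratio=20             # block writers once dirty pages reach 20% of RAM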
  7. No, and updating to v6.4.1 should prevent it from happening again. It's the allocated amount: btrfs works differently from most filesystems, in that chunks are first allocated (mainly for data and metadata) and then those chunks are used. This issue happens on older kernels with SSDs: some unused or little-used data chunks are not freed, and when the filesystem then tries to allocate a new metadata chunk it fails, resulting in ENOSPC.
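     To see the allocated vs. used figures this refers to (a standard btrfs command, not part of the original reply), e.g. for the cache pool:

     # shows how much space is allocated to data/metadata chunks vs. actually used
     btrfs filesystem usage /mnt/cache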
  8. The btrfs filesystem on your cache device is fully allocated; see here for how to fix it: https://lime-technology.com/forums/topic/62230-out-of-space-errors-on-cache-drive/?do=findComment&comment=610551 When done, upgrade to v6.4, since this issue has been fixed in the newer kernels.
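     The linked post has the actual procedure; as a rough sketch of the usual fix (my wording, not necessarily identical to the link), a balance that rewrites mostly-empty data chunks frees the allocation again:

     # rewrite data chunks that are at most 75% full, releasing the unused allocation
     btrfs balance start -dusage=75 /mnt/cache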
  9. If it's using btrfs, the free space can sometimes be incorrectly reported and/or the filesystem can be fully allocated, resulting in ENOSPC.
  10. The only time I've seen VMs pause is when running out of space: not on the vdisk itself, but on the device where the vdisk is stored. If in doubt, post your diags.
  11. No, I understood the array was not starting, so maybe there was filesystem corruption, but the array is started on the diags.
  12. Please post your diagnostics; if you can't get them from the GUI, type diagnostics on the console.
  13. And stop the docker service before moving.
  14. On the console type: mover stop
  15. No, docker needs to be disabled until all the files are on the cache again. If you have many small files, it would be much faster to move them manually, e.g., with Midnight Commander (mc on the console); move from /mnt/cache to /mnt/disk#
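     If you'd rather use a non-interactive command than mc (my suggestion, not from the original reply), rsync can do the same disk-to-disk move; the share and disk names here are just examples:

     # example share/disk names; moves the files and deletes the source copies
     # (empty source directories are left behind)
     rsync -avX --remove-source-files /mnt/cache/appdata/ /mnt/disk1/appdata/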
  16. Tom mentioned something about not being around the forums much this week; back to normal next week.
  17. Once the data is on the array, stop it and, assuming it's a single cache device, click on it and change the cache filesystem to xfs. Start the array and you'll have the option to format. You can then leave it as xfs if you don't plan to create a cache pool, or change back to btrfs and format again.
  18. Back up your cache, reformat, and restore the data; you can use this procedure to backup/restore.
  19. No, I'm still using the same method. I don't use encryption but don't see a reason why it would cause issues.
  20. Keep in mind that with the default raid1 profile only 250GB will be usable, despite the main page showing more free space available on the pool.
  21. You can try, but the result will probably be the same; you can also run a filesystem check in case there's corruption.
  22. Check that it's using the latest firmware, P20.00.07; there were two previous releases, and the first one especially (P20.00.00) had issues.
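     One way to check the installed firmware version on an LSI SAS2 HBA (a standard LSI tool, not quoted from the original reply):

     # lists the controller(s) with their firmware and BIOS versions
     sas2flash -listall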
  23. Yes, and LT already confirmed that only the trial license (and a few beta releases in the past) communicates with their servers.