JorgeB

Moderators
  • Posts: 63,929
  • Joined
  • Last visited
  • Days Won: 675
Everything posted by JorgeB

  1. Not likely, Crucial usually releases a bootable update, so you should still be able to do it on the server.
  2. It's a problem with the MX500, likely a firmware bug. I have 12 in a pool and keep getting the same warning, 1 pending sector that disappears after a few minutes; hopefully they fix it with a firmware update.
  3. To compare the data you'll need to change one of the disks' UUIDs so they can be mounted at the same time. You can change the old disk's UUID with xfs_admin -U generate /dev/sdX1, then mount it with the UD plugin and run a compare.
  4. Make sure the content looks correct before rebuilding to the same disk, especially since you canceled the parity check after the unclean shutdown, so there may be some sync errors. The safest way would be to rebuild to a spare disk and then compare the data with the old disk, for example with rsync in checksum mode (there's a short rsync sketch after this list).
  5. OK, now it's showing filesystem corruption, so you need to run xfs_repair on the emulated disk. Start the array in maintenance mode and run xfs_repair -v /dev/md7 (there's a short xfs_repair sketch after this list).
  6. Is the unassigned disk still mounted in UD? It was in your latest diags, and you can't have the same UUID mounted twice. If yes, unmount it and restart the array; if not, post new diags.
  7. Was already taking a look; there are some weird controller errors, but they seem unrelated. Unsupported partition layout is not a filesystem problem, and I find it very strange that this would be caused by an unclean shutdown, unless it somehow damaged the MBR. To fix it you'll need to rebuild the disk, either to the same disk or, to play it safer, to a spare one if available; the disk should mount if you start the array with it emulated.
  8. You don't need to disable the mover; just set that share (or those shares) to cache "only" and the mover won't touch them.
  9. You should also update the firmware; the latest is 20.00.07 (there's a sas2flash sketch after this list).
  10. Try erasing the BIOS, it's not needed, and as an added bonus the controller will boot much faster: https://lime-technology.com/forums/topic/12114-lsi-controller-fw-updates-irit-modes/?do=findComment&comment=632252
  11. You can't check the filesystem on a disk after it was wiped. Try rebooting to see if UD picks up the new disk state, though it shouldn't be necessary.
  12. Try using wipefs: first wipefs -a /dev/sdX1, then wipefs -a /dev/sdX. Replace X with the correct identifier, and triple-check you're using the correct one before running the commands (there's a short verification sketch after this list).
  13. With 15 array devices you'll be fine: connect 8 to the top HBA and 7 to the bottom. The cache devices can be on the onboard ports since they won't be (or shouldn't be) in heavy use during parity checks/disk rebuilds.
  14. Just be aware that the x4 slot goes through the south bridge, i.e., it shares the A-link connection (2000MB/s) with the onboard SATA ports. So if you're using all 8 ports on the 2nd HBA and also all the onboard ports, the bandwidth will be shared by all 14 devices, roughly 140MB/s each during a parity check; if you're not using all the ports, assign as few array devices as possible to the 2nd HBA + onboard controller.
  15. Yes, for all purposes it's a separate filesystem, so although you're doing a move it acts like a copy on the same disk. I know nothing about scripts, I can Google and take this or that which works for me, but the script I use is very crude. It works for me because I know its limitations, e.g., it assumes you already have two existing snapshots on the first run, or it won't work correctly when deleting the older snapshot. Still, I don't mind posting it, so it might give some ideas and a good laugh to the forum members who do know how to write scripts. I use the User Scripts plugin with two very similar scripts: one runs on a daily schedule and takes a snapshot with the VMs running, and a very similar one I run manually, usually once a week, to take a snapshot with all the VMs shut down. Ideally, if there's a problem I would restore to an offline snapshot, but the online snapshots give more options in case I need something more recent.

      #description=
      #backgroundOnly=true
      cd /mnt/cache
      # oldest and most recent existing offline snapshots
      sd=$(echo VMs_Off* | awk '{print $1}')
      ps=$(echo VMs_Off* | awk '{print $2}')
      if [ "$ps" == "VMs_Offline_$(date '+%Y%m%d')" ]
      then
          echo "There's already a snapshot from today"
      else
          # take a new read-only snapshot of the VMs subvolume
          btrfs sub snap -r /mnt/cache/VMs /mnt/cache/VMs_Offline_$(date '+%Y%m%d')
          sync
          # send it incrementally (relative to the previous snapshot) to the backup disk
          btrfs send -p /mnt/cache/$ps /mnt/cache/VMs_Offline_$(date '+%Y%m%d') | btrfs receive /mnt/disks/backup
          if [[ $? -eq 0 ]]; then
              /usr/local/emhttp/webGui/scripts/notify -i normal -s "Send/Receive complete"
              # on success delete the oldest snapshot from both the cache and the backup
              btrfs sub del /mnt/cache/$sd
              btrfs sub del /mnt/disks/backup/$sd
          else
              /usr/local/emhttp/webGui/scripts/notify -i warning -s "Send/Receive failed"
          fi
      fi
  16. Those values should be OK, but you need to test, and probably don't go above that. At least I found that having a dirty ratio of 50% or higher was great while the RAM lasted, but slower than normal for larger transfers, since the kernel flushed all the dirty data to the disks while still writing new data, resulting in noticeably slower writes. It's also good to be using a UPS: the more dirty memory you have, the more data you'll lose if there's a power cut during writes (there's a sysctl sketch after this list for experimenting with these values).
  17. No, and updating to v6.4.1 should prevent it from happening again. It's the allocated amount; btrfs works differently from most filesystems: first chunks are allocated, mainly for data and metadata, and then those chunks are used. This issue happens on older kernels with SSDs: some unused or little-used data chunks are not freed, and when the filesystem tries to allocate a new metadata chunk it fails, resulting in ENOSPC.
  18. The btrfs filesystem on your cache device is fully allocated, see here on how to fix it: https://lime-technology.com/forums/topic/62230-out-of-space-errors-on-cache-drive/?do=findComment&comment=610551 When done, upgrade to v6.4 since this issue has been fixed in the newer kernels (there's also a short btrfs usage/balance sketch after this list).
  19. If it's using btrfs, the free space can sometimes be incorrectly reported and/or the filesystem can be fully allocated, resulting in ENOSPC.
  20. The only time I've seen VMs pause is when running out of space, not on the vdisk itself but on the device where the vdisk is stored; if in doubt, post your diags.
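
For items 3 and 4 above, a minimal sketch of comparing a rebuilt disk against the old one with rsync in checksum mode; the mount points are only examples (the rebuilt disk at /mnt/disk8, the old disk mounted by UD at /mnt/disks/old_disk8), so adjust them to your setup:

    # give the old disk a new UUID so both copies can be mounted at the same time
    xfs_admin -U generate /dev/sdX1
    # dry-run rsync with checksums: files that differ are listed, nothing is copied
    rsync -rcnv /mnt/disk8/ /mnt/disks/old_disk8/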
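
For item 5, a short sketch of the xfs_repair sequence, assuming the array is started in maintenance mode and disk7 is the affected one; the -n pass is optional and only reports what would be fixed:

    xfs_repair -nv /dev/md7    # read-only check, makes no changes
    xfs_repair -v /dev/md7     # actual repair
    # if it complains about a dirty log and asks for -L, that option discards the journal, so use it only if required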
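
For items 9 and 10, a hedged sketch of checking and updating an LSI SAS2 HBA with sas2flash; the firmware filename is just an example for a 9211-8i in IT mode, and the exact erase/flash procedure for removing the boot BIOS is covered in the linked thread, so follow that rather than this outline:

    sas2flash -listall            # show adapters and their current firmware/BIOS versions
    sas2flash -o -f 2118it.bin    # flash the IT firmware (example filename)
    # removing the existing boot BIOS involves an erase step; see the linked thread for the exact commands for your card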
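
For item 12, a sketch of how one might triple-check the device before running wipefs; sdX stays a placeholder:

    lsblk -o NAME,SIZE,MODEL,SERIAL,MOUNTPOINT /dev/sdX   # confirm this really is the disk you mean to wipe
    wipefs -a /dev/sdX1    # wipe the filesystem signature on the partition
    wipefs -a /dev/sdX     # then the partition-table signature on the disk itself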
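
For item 16, a sketch of inspecting and experimenting with the dirty memory settings from the console; the 20/10 values are just an example to test with, not a recommendation:

    sysctl vm.dirty_ratio vm.dirty_background_ratio   # show the current values
    sysctl vm.dirty_ratio=20                          # percent of RAM writers may dirty before they are forced to flush
    sysctl vm.dirty_background_ratio=10               # percent at which background flushing starts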
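
For items 17 to 19, a sketch of how to see whether a btrfs cache filesystem is fully allocated and how a balance can free mostly-empty chunks; /mnt/cache is the usual Unraid mount point and the usage filter value is only an example:

    btrfs filesystem usage /mnt/cache           # 'Device allocated' close to the device size means fully allocated
    btrfs balance start -dusage=75 /mnt/cache   # rewrite data chunks that are at most 75% full, freeing the rest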