Everything posted by JorgeB

  1. Did you read the posts above? Also, it can vary with the size of the files being transferred: the smaller the average file size, the lower the speed.
  2. It does, but since it wasn't working... Auto mode isn't implemented yet.
  3. Turbo write won't perform as well when reading from and writing to the array at the same time, since one of the disks will need to alternate/seek between reads and writes. Still, 22MB/s seems slow. unBALANCE uses rsync, and rsync is great but not well regarded for speed; try doing a copy with mc or Windows from one disk to another, as sketched below. I would expect speeds around 60MB/s, but like I said it will never get close to turbo writes from outside the array.
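      For reference, a quick comparison you can run from the console (just a sketch; the disk numbers and share name are placeholders, adjust to your setup):

        # copy with rsync (the tool unBALANCE uses), showing per-file progress
        rsync -ah --progress /mnt/disk1/Movies/ /mnt/disk2/Movies/
        # the same copy with plain cp, for comparison
        cp -a /mnt/disk1/Movies/. /mnt/disk2/Movies/

      If both are similarly slow, the bottleneck is the simultaneous read/write on the array rather than rsync itself.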
  4. Dec 2 17:17:39 unRAID root: Found invalid GPT and valid MBR; converting MBR to GPT format
     Dec 2 17:17:39 unRAID root: in memory.
     Dec 2 17:17:39 unRAID root: ***************************************************************
     Dec 2 17:17:39 unRAID root:
     Dec 2 17:17:39 unRAID root: Warning: The kernel is still using the old partition table.
     Dec 2 17:17:39 unRAID root: The new table will be used at the next reboot or after you
     Dec 2 17:17:39 unRAID root: run partprobe(8) or kpartx(8)
     Just reboot and it will format.
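      Alternatively, as the log itself suggests, you may be able to have the kernel re-read the new partition table without a reboot (a sketch only; /dev/sdX is a placeholder for the affected disk, which must not be mounted):

        # ask the kernel to re-read the partition table of the disk
        partprobe /dev/sdX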
  5. Seagate won't be doing anything to make the disks reallocate sectors on purpose, bad luck I guess.
  6. And you might want to uninstall the plugin for now or it might override the manual change.
  7. You don't need the plugin to enable turbo write, just change the write method to "reconstruct write" under Settings -> Disk Settings.
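      If you prefer the console, the same setting can reportedly be toggled with mdcmd; this is an assumption on my part, so verify it before relying on it, and note it would not persist across reboots:

        # assumed syntax: 1 = reconstruct write (turbo), 0 = read/modify/write
        /usr/local/sbin/mdcmd set md_write_method 1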
  8. Yes, sorry, you can acknowledge monitored SMART attributes (when there's a yellow warning), but since the device has a "failing now" SMART attribute, that can't be acknowledged, so the warning will always be there.
  9. It means the SSD has reached its estimated life; it doesn't mean it's failing, and it can work for a long time without issues. You can acknowledge the SMART attribute by clicking on the dashboard warning.
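      If you want to see the raw attribute from the console (a sketch; /dev/sdX is a placeholder for your SSD):

        # list all SMART attributes, including the wear/life indicator reported by the drive
        smartctl -A /dev/sdX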
  10. Those call traces are because the docker image is again corrupt, but as mentioned this is likely the result of the crashes, not the reason for them; still, you'll need to recreate the docker image again. You could try running the server in safe mode for a couple of days and see if the crashes persist.
  11. I don't see anything of interest in the snippet you posted, you should always post the full diagnostics.
  12. Not a docker, the docker image; there was a call trace when mounting it. It's easy to delete and recreate, see here: https://forums.unraid.net/topic/57181-real-docker-faq/?do=findComment&comment=564309
  13. Likely a consequence of the lockups: there appears to be a problem with the docker image, which you should delete and recreate. As for the lockups, the diags are from just after rebooting, so there's not much to see. Is anything output to a monitor when it locks up?
  14. Don't know if this is a bug or a limitation, but even if it's not a bug maybe something can be done to make it better; this happened to a user recently in this thread and I was able to reproduce it. How to reproduce:
      - start with a 3 device cache pool
      - unassign cache1
      - re-arrange the pool, e.g., re-assign cache3 to the cache1 slot; up to here all is working fine
      - change cache slots to 2; now Unraid will indicate cache1 is new, and this results in an unmountable and damaged cache pool if the user starts the array
      The procedure works as expected if the cache slots aren't changed at the same time, i.e., remove one device from a 3 device pool, re-arrange the devices, start the array to balance the pool down to two devices, stop the array, and only then change the number of cache slots. Possibly this can be improved, or maybe make it impossible to change the number of slots when removing (or re-arranging) cache devices, or it will likely happen to someone else in the future and result in data loss.
  15. Did you check if all the HGST disks have the same identifier? If yes, maybe there's a way to change them on the Areca controller; if not, the best bet would be to use a plain HBA, which would pass the disks' serial numbers to Unraid and so avoid the problem.
  16. This no longer happens on v6.6.5, possibly fixed in an earlier release since v6.6.1; either way, this can be closed.
  17. To scrub the pool:

        btrfs scrub start /mnt/disks/UD_pool_name

      To get the scrub status, during or after it's done:

        btrfs scrub status /mnt/disks/UD_pool_name
  18. How can I monitor a btrfs or zfs pool for errors?

      As some may have noticed, the GUI errors column for the cache pool is just for show, at least for now, as the error counter remains at zero even when there are errors. I've already asked and hope LT will use the info from btrfs dev stats/zpool status in the near future, but for now anyone using a btrfs or zfs cache or unassigned redundant pool should regularly monitor it for errors, since it's fairly common for a device to drop offline, usually from a cable/connection issue. Because there's redundancy the user keeps working without noticing, and when the device comes back online on the next reboot it will be out of sync. For btrfs a scrub can usually fix it (though note that any NOCOW shares can't be checked or fixed, and worse than that, if you bring online an out of sync device it can easily corrupt the data on the remaining good devices, since btrfs can read from the out of sync device without knowing it contains out of sync/invalid data), but it's good for the user to know there's a problem as soon as possible so it can be corrected; for zfs the missing device will automatically be synced when it's back online.

      BTRFS

      Any btrfs device or pool can be checked for read/write errors with the btrfs dev stats command, e.g.:

        btrfs dev stats /mnt/cache

      It will output something like this:

        [/dev/sdd1].write_io_errs 0
        [/dev/sdd1].read_io_errs 0
        [/dev/sdd1].flush_io_errs 0
        [/dev/sdd1].corruption_errs 0
        [/dev/sdd1].generation_errs 0
        [/dev/sde1].write_io_errs 0
        [/dev/sde1].read_io_errs 0
        [/dev/sde1].flush_io_errs 0
        [/dev/sde1].corruption_errs 0
        [/dev/sde1].generation_errs 0

      All values should always be zero, and to avoid surprises they can be monitored with a script using Squid's great User Scripts plugin. Just create a script with the contents below, adjusting the path and pool name as needed; I recommend scheduling it to run hourly. If there are any errors you'll get a system notification on the GUI and/or push/email if so configured.

        #!/bin/bash
        if mountpoint -q /mnt/cache; then
          btrfs dev stats -c /mnt/cache
          if [[ $? -ne 0 ]]; then
            /usr/local/emhttp/webGui/scripts/notify -i warning -s "ERRORS on cache pool"
          fi
        fi

      If you get notified you can then check with the dev stats command which device is having issues and take the appropriate steps to fix it. Most times when there are read/write errors, especially with SSDs, it's a cable issue, so start by replacing the cables. Then, since the stats are for the lifetime of the filesystem, i.e., they don't reset with a reboot, force a reset of the stats with:

        btrfs dev stats -z /mnt/cache

      Finally run a scrub, make sure there are no uncorrectable errors, and keep working normally; if there are any more issues you'll get a new notification.

      P.S. you can also monitor a single btrfs device or a non redundant pool, but for those any dropped device is usually quickly apparent.

      ZFS

      For zfs, click on the pool and scroll down to the "Scrub Status" section: all values should always be zero, and to avoid surprises they can be monitored with a script using Squid's great User Scripts plugin; @Renegade605 created a nice script for that. I recommend scheduling it to run hourly; if there are any errors you'll get a system notification on the GUI and/or push/email if so configured.

      If you get notified you can then check in the GUI which device is having issues and take the appropriate steps to fix it. Most times when there are read/write errors, especially with SSDs, it's a cable issue, so start by replacing the cables. zfs stats clear after an array start/stop or reboot, but if that option is available you can also clear them using the GUI by clicking on "ZPOOL CLEAR" below the pool stats. Then run a scrub, make sure there are no more errors, and keep working normally; if there are any more issues you'll get a new notification.

      P.S. you can also monitor a single zfs device or a non redundant pool, but for those any dropped device is usually quickly apparent.

      Thanks to @golli53 for a script improvement so errors are not reported if the pool is not mounted.
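      For illustration only, a minimal sketch of what such a zfs check could look like (this is not @Renegade605's script; it assumes a pool named "cache" mounted at /mnt/cache):

        #!/bin/bash
        # warn via the Unraid notification system if zpool reports the pool as not healthy
        if mountpoint -q /mnt/cache; then
          if ! zpool status -x cache | grep -q "is healthy"; then
            /usr/local/emhttp/webGui/scripts/notify -i warning -s "ERRORS on zfs cache pool"
          fi
        fi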
  19. I think the problem is that the Areca controller is identifying all 6 HGST disks the same, likely because those disks have a generic logical unit ID with lots of zeros at the end (0x001b4d2000000000), different from the other SAS disks, e.g. the 1TB HP is 0x001b4d2029840ae9. So the first one is recognized by Unraid, but since the other ones have the same ID they won't work; Unraid needs a different ID for each disk. Maybe it's possible to change that in the Areca config utility/BIOS, though your best bet would be to use an HBA instead of a RAID controller.
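      To see the identifiers the system is being given for each disk, something like this can help (a sketch; it lists the by-id device names the kernel created, filtering out partition entries):

        ls -l /dev/disk/by-id/ | grep -v part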
  20. For some reason there's a problem getting the ID of all but one of the HUS723030ALS640 disks; disconnect those 5 disks to confirm that the others will show up then.

        Nov 26 18:28:21 Tower emhttpd: device /dev/sdd problem getting id
        Nov 26 18:28:21 Tower emhttpd: device /dev/sdb problem getting id
        Nov 26 18:28:21 Tower emhttpd: device /dev/sdc problem getting id
        Nov 26 18:28:22 Tower emhttpd: device /dev/sdh problem getting id
        Nov 26 18:28:22 Tower emhttpd: device /dev/sdg problem getting id
  21. Stop the VM service, change the libvirt storage location to /mnt/disk1/system/libvirt/libvirt.img, start the service, and your VM should be back. You can then delete the libvirt.img on cache and move the other one there with the mover (with the VM service stopped; you'll need to adjust the path again at the end).
  22. You likely have duplicate VM and docker images from when you worked on them without the cache; post the output of:

        find /mnt -name libvirt.img
  23. You can create multiple pools with UD, with some limitations, details here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=462135
  24. Yes, I had the same issue recently, on a couple of different flash drives.