Everything posted by JorgeB

  1. I'm not saying btrfs doesn't have something to do with this issue, but I very much doubt it's only that. I've never had this problem, and I use btrfs on all my servers, for array disks and cache, both single devices and, at one point, an 8-device raid10 pool. I just ran a test with all of the following at the same time: mover running (25GB from cache to the array), a manual copy on the console using cp of another 25GB from cache to the array, and another 25GB copying over LAN from my desktop to the cache disk. Load average peaked at about 4, this on a dual-core Pentium G620, and the webGUI was normally usable during all the operations.
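     A minimal sketch of how a similar load test could be reproduced from the console; the paths and the 60-second sampling interval are assumptions, not the exact ones used above:

         # run two local ~25GB copies in parallel (example paths)
         cp -r /mnt/cache/test1 /mnt/disk1/ &
         cp -r /mnt/cache/test2 /mnt/disk2/ &
         # a third ~25GB copy would arrive over the LAN, e.g. via SMB from a desktop
         # sample the load average until the local copies finish
         while pgrep -x cp > /dev/null; do uptime; sleep 60; done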
  2. AFAIK it doesn't have a temp. sensor, the only way would be with an IR temperature gun.
  3. Split levels can be confusing, so let me give you 2 examples that should help; this is how 2 of my shares are set (Share / Folder / Files — note the share itself is also a folder, the top-level one):
     Movies / Movie Title / Movie and related files: split only the top level directory, so all files from any one movie will be on the same disk.
     TV / Show Name / Season # / Episodes: split only the top two directory levels, so all episodes from the same season will be on the same disk, while different seasons of the same show can be on different disks. If you wanted all seasons on the same disk, you'd set the split level as in the Movies share. See the directory sketch below.
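     A hypothetical directory layout illustrating the two settings; the titles are examples only:

         Movies/                      <- share; different movie folders may land on different disks
         └── Movie Title/             <- split level 1: this folder and everything
             ├── movie.mkv               inside it stay together on one disk
             └── movie.srt
         TV/                          <- share
         └── Show Name/               <- level 2: different seasons may land on different disks
             └── Season 1/            <- split level 2: all of a season's episodes
                 ├── S01E01.mkv          stay together on one disk
                 └── S01E02.mkv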
  4. No, any deleted blocks should be freed immediately.
  5. Possible, yes, but not likely; the usual issues are parity sync errors and/or dropped disks.
  6. 329TB (assembled)!! Man!! Damn! That's my kind of hoarder
  7. You could try swapping the M1015s from one server to the other; this would allow you to eliminate both the controller and the firmware as potential issues. Other than that, I'd also try a different power supply if available.
  8. The M1015 is using the latest p20 firmware, and there are no known issues with it (unlike the first p20 firmware, 20.00.00.00), but you're virtualizing unRAID, and that adds another source of possible compatibility issues. The Intel expander is also on the latest firmware. The rebuilding disk dropped offline so there's no SMART report; assuming the disk is good, it could be some issue with the LSI firmware and virtualization, some other virtualization issue/incompatibility, or a hardware issue, like the power supply, controller, etc.
  9. Both the SASLP and the SAS2LP have been the source of many issues for some users with unRAID v6, mainly dropped disks, while other users have no issues; but AFAIK nobody has ever had any problems with the LSI models, hence why they are now the recommended ones.
  10. Settings -> Network Settings -> Interface Rules. Select which NIC you want as eth0 and configure it; you'll need to reboot to apply any interface rules change. You can use the boot GUI mode to configure these if you don't currently have network access.
  11. The first check in the log was non-correcting, so no errors were corrected; the 2nd check was correcting, so a 3rd check should return 0 errors. These may be related to the SAS2LP, but probably aren't; any unclean shutdowns recently? Having said that, Marvell controllers are not currently recommended; LSI is a much better option.
  12. Just had time for a quick look; I don't see anything about docker issues in the logs. There are some ATA errors resulting in 2 disabled disks, but since the OP didn't mention them I assume the disks were removed on purpose.
  13. unRAID uses 64 as the starting sector for all 4K-aligned partitions, and because of how parity currently works I believe it won't be easy to change.
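     For example, one way to check the starting sector of an array disk's partition; the device name is an example:

         fdisk -l /dev/sdb
         # the data partition should show 64 in the Start column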
  14. Never used OMV, but Ubuntu can mount an XFS unRAID data disk.
  15. There are known issues with unRAID and Ryzen, see here for a workaround:
  16. IMO the main current advantage of ZFS over btrfs is RAIDZ for pools; RAID5/6 on btrfs is still experimental and not ready for production. But using a ZFS pool would negate unRAID's main advantages over FreeNAS, like using the full capacity of different-sized disks, the possibility of adding or removing disks from the array, etc. Since unRAID uses each disk as a separate filesystem, btrfs is as good an option. And don't forget that unlike btrfs, ZFS has no filesystem repair tools; if a disk turns unmountable there's nothing you can do, and although rare, it happens, as you can see on the FreeNAS forum.
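     For reference, a sketch of the btrfs repair/recovery options alluded to above; the device name and recovery path are examples, and --repair should be treated as a last resort:

         btrfs check /dev/sdX1                  # read-only check, run on an unmounted disk
         btrfs restore /dev/sdX1 /mnt/recovery  # copy data off an unmountable filesystem
         btrfs check --repair /dev/sdX1         # last resort: attempt an in-place repair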
  17. All files are checksummed automatically; if you want to check that everything is OK you can run a scrub, but btrfs checks all files on read and will error on any checksum failure, i.e., if you're watching a movie from your server and the checksum fails, there will be an error during playback or copy.
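     A minimal example of running a scrub on a btrfs pool; the mount point is an assumption:

         btrfs scrub start /mnt/cache   # starts scrubbing in the background
         btrfs scrub status /mnt/cache  # progress, plus any checksum errors found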
  18. I think so; I use it on all my servers and have no problem recommending it, as long as it's a stable and UPS-protected server. Yes, and more important than that for me, it allows you to be sure whether any files were corrupted when something unexpected happens, e.g., some read errors on another disk during a rebuild, a disk getting disabled during a file copy operation, etc.
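     One way this check could be done after such an event; the mount point is an example, and the grep pattern matches the kernel's btrfs scrub checksum messages:

         btrfs scrub start -B /mnt/disk1    # -B waits until the scrub finishes
         btrfs device stats /mnt/disk1      # per-device counters, including corruption errors
         dmesg | grep -i 'checksum error'   # the kernel log names any affected files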
  19. Yes, I was surprised as well; very little overhead when compared, for example, with PCIe.
  20. ata-TOSHIBA_HDWD130_37F65K8AS@
      ata-TOSHIBA_MQ01ABD100_Z3AJC14TT@
      ata-TOSHIBA_MQ01ABD100_Z3AJC14TT-part1@
      ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T1300432@
      ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T1300432-part1@
      ata-WDC_WD30EZRX-00MMMB0_WD-WMAWZ0425030@
      ata-WDC_WD30EZRX-00MMMB0_WD-WMAWZ0425030-part1@
      ata-WDC_WD30EZRX-00SPEB0_WD-WCC4E1ZH8492@
      ata-WDC_WD30EZRX-00SPEB0_WD-WCC4E1ZH8492-part1@
      ata-WDC_WD30EZRX-22D8PB0_WD-WCC4N1YP88NY@
      ata-WDC_WD30EZRX-22D8PB0_WD-WCC4N1YP88NY-part1@
      ata-WDC_WD60EZRX-00MVLB1_WD-WX31D944CS0X@
      ata-WDC_WD60EZRX-00MVLB1_WD-WX31D944CS0X-part1@
      nvme-TOSHIBA-RD400_@
      nvme-eui.e83a9702000018f5@
      nvme-eui.e83a9702000018f5-part1@
      usb-Kingston_DataTraveler_2.0_64006A7807FAB031D96709AA-0:0@
      usb-Kingston_DataTraveler_2.0_64006A7807FAB031D96709AA-0:0-part1@
      wwn-0x10501182279695618049x@
      wwn-0x10501182279695618049x-part1@
      wwn-0x11233951013753212929x@
      wwn-0x11233951013753212929x-part1@
      wwn-0x12438954472907100161x@
      wwn-0x12438954472907100161x-part1@
      wwn-0x16616003131295092736x@
      wwn-0x3635306224004976640x@
      wwn-0x3635306224004976640x-part1@
      wwn-0x6330196881211281409x@
      wwn-0x6330196881211281409x-part1@
      wwn-0x8622904561878847489x@
      wwn-0x8622904561878847489x-part1@