Everything posted by JorgeB

  1. This is the problem: Oct 12 12:57:34 Tower kernel: sd 1:0:0:0: [sdc] Unsupported sector size 520. I believe that on those disks the sector size can be set to 512 / 520 / 528; unRAID only supports 512.
  2. Diagnostics may give a clue (tools -> diagnostics)
  3. Best explained here: https://btrfs.wiki.kernel.org/index.php/Gotchas#The_ssd_mount_option Well, it was edited; this is what it said:
  4. This behavior is currently normal with btrfs, especially on a cache-type filesystem where data is constantly being added and deleted. This should improve once we get on kernel 4.14; until then, if you like, you can experiment with this. You'll still need to run a balance to bring it down (and use a higher usage value like -dusage=95), but it should keep the allocated space closer to the used space. Create this file on the flash: config/extra.cfg and put this line in it: cacheExtra="nossd" Then stop and restart the array. Note: this only works for cache pools, but you can still have a single-device "pool" if the number of defined cache slots >= 2.
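The file from the step above boils down to a single shell-style assignment (the config/extra.cfg path and cacheExtra key are from the instructions above; the /mnt/cache mount point in the comment is the usual unRAID cache path, an assumption):

```shell
# Contents of config/extra.cfg on the flash drive:
cacheExtra="nossd"
# After stopping and restarting the array, allocated space is brought back
# down with a balance (run on the live server):
# btrfs balance start -dusage=95 /mnt/cache
```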
  5. Also note that if you're using rsync it will resume the copy and only transfer the missing files.
  6. Then your best option is to post on the support thread for that docker.
  7. Deleting and recreating the docker image is usually the best way to fix dockers.
  8. What capacity is your 960 Evo? Only the 1TB can sustain 1GB/s writes.
  9. This is normal and it affects all PCIe 1.0 and 2.0 links due to 8b/10b encoding, PCIe 3.0 upgrades the encoding to 128b/130b.
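The encoding overhead works out with a bit of arithmetic; the per-lane numbers below use the standard PCIe line rates (5 GT/s for 2.0, 8 GT/s for 3.0):

```shell
# Per-lane usable bandwidth after encoding overhead (illustrative arithmetic):
awk 'BEGIN {
  # PCIe 2.0: 5 GT/s with 8b/10b encoding -> 20% overhead
  printf "PCIe 2.0 x1: %.1f MB/s\n", 5e9 * 8/10 / 8 / 1e6
  # PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~1.5% overhead
  printf "PCIe 3.0 x1: %.1f MB/s\n", 8e9 * 128/130 / 8 / 1e6
}'
```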
  10. Besides the two SSDs I only see attempts to initialize another device, which I expect is one of the 2TB Seagates; it's connected to onboard SATA port #3:

      Oct 1 00:20:08 NAS kernel: ata3: SATA link down (SStatus 0 SControl 310)
      Oct 1 00:20:08 NAS kernel: ata3: EH complete
      Oct 1 00:20:08 NAS kernel: ata3: exception Emask 0x10 SAct 0x0 SErr 0x4000000 action 0xe frozen
      Oct 1 00:20:08 NAS kernel: ata3: irq_stat 0x00000040, connection status changed
      Oct 1 00:20:08 NAS kernel: ata3: SError: { DevExch }
      Oct 1 00:20:08 NAS kernel: ata3: limiting SATA link speed to 1.5 Gbps
      Oct 1 00:20:08 NAS kernel: ata3: hard resetting link
      Oct 1 00:20:09 NAS kernel: ata3: SATA link down (SStatus 0 SControl 310)
      Oct 1 00:20:09 NAS kernel: ata3: EH complete
      Oct 1 00:20:10 NAS kernel: ata3: exception Emask 0x10 SAct 0x0 SErr 0x4000000 action 0xe frozen
      Oct 1 00:20:10 NAS kernel: ata3: irq_stat 0x00000040, connection status changed
      Oct 1 00:20:10 NAS kernel: ata3: SError: { DevExch }
      Oct 1 00:20:10 NAS kernel: ata3: limiting SATA link speed to 1.5 Gbps
      Oct 1 00:20:10 NAS kernel: ata3: hard resetting link
      Oct 1 00:20:11 NAS kernel: ata3: SATA link down (SStatus 0 SControl 310)
      Oct 1 00:20:11 NAS kernel: ata3: EH complete
      Oct 1 00:20:12 NAS kernel: ata3: exception Emask 0x10 SAct 0x0 SErr 0x4000000 action 0xe frozen
      Oct 1 00:20:12 NAS kernel: ata3: irq_stat 0x00000040, connection status changed
      Oct 1 00:20:12 NAS kernel: ata3: SError: { DevExch }
      Oct 1 00:20:12 NAS kernel: ata3: limiting SATA link speed to 1.5 Gbps
      Oct 1 00:20:12 NAS kernel: ata3: hard resetting link

      The device is never successfully initialized, and the 3 remaining onboard SATA ports report link down. You should try different cables on those Seagates and check if they are detected by the board BIOS at startup; if they aren't, unRAID won't detect them either.
As for the SAS disks, I assume they are connected to the LSI controller. The controller is detected by unRAID and the driver loaded, but there's no attempt to find any disks. Check its BIOS as well to see if the disks are detected there; and if it's a RAID controller (or in RAID mode), you may need to create a JBOD or a RAID0 for each disk.
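A quick way to spot this pattern is to grep the diagnostics syslog for the link-state messages; the here-doc below is just a stand-in sample so the filter can be shown self-contained:

```shell
# Sketch: the link-state lines worth filtering for when a port keeps dropping.
# On a real system, point grep at the syslog from the diagnostics instead.
PATTERN='ata[0-9]+: (SATA link down|hard resetting link|limiting SATA link speed)'
matches=$(grep -cE "$PATTERN" <<'EOF'
Oct 1 00:20:08 NAS kernel: ata3: SATA link down (SStatus 0 SControl 310)
Oct 1 00:20:08 NAS kernel: ata3: limiting SATA link speed to 1.5 Gbps
Oct 1 00:20:08 NAS kernel: ata3: hard resetting link
Oct 1 00:20:08 NAS kernel: ata3: EH complete
EOF
)
echo "$matches link-state events"
```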
  11. SSH into the server or use the console and type: mover stop
  12. Never seen it, and I would guess it's not very common, but I did have a couple of situations where checksums were very valuable, and that's why I want them: once when there were read errors on a second disk during a rebuild of a failed disk (this was before dual parity), and another time when a disk red-balled during a disk-to-disk move. Checksums allowed me to quickly find the affected files and replace them from backups.
  13. A scrub can only fix checksum errors on a redundant btrfs filesystem, e.g. a raid1 cache pool.
  14. Yes, the file name will appear in the syslog; any time you want, you can also run a scrub to check that everything is OK.
  15. No, btrfs will error if it finds a checksum error, i.e., you won't be able to copy or play that file; you just need to check the log and you will find a checksum error there.
  16. Correct, since each data disk is a separate filesystem using the default profile, btrfs will detect a checksum error but won't be able to fix it; that is what backups are for. You could use the dup profile for one or more disks, so data would be duplicated and any checksum error fixable, but obviously you'd lose half the capacity on those disks.
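A sketch of that dup conversion, assuming /mnt/disk1 is the btrfs disk to convert (guarded so it only does anything on a system where btrfs-progs and that mount actually exist):

```shell
MNT=/mnt/disk1   # assumption: the btrfs-formatted array disk to convert
if command -v btrfs >/dev/null && mountpoint -q "$MNT" 2>/dev/null; then
  btrfs balance start -dconvert=dup "$MNT"   # duplicate every data block
  btrfs scrub start -B "$MNT"                # scrub can now repair, not just detect
else
  echo "illustration only: requires btrfs-progs and a mounted $MNT"
fi
```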
  17. Did you click the link? Decrease the amount of RAM used for cache and it should help with the OOM errors; it's mostly a kernel problem.
  18. Are you using a 10GbE switch or direct connect? If using a switch you need to change the MTU there also.
  19. These may help if not already in use: 1 - Change NIC MTU to 9000 (unRAID server and any other computer with a 10GbE NIC). 2 - Go to Settings -> Global Share Settings -> Tunable (enable direct IO): set to Yes.
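For step 1, a sketch assuming eth0 is the 10GbE NIC (in unRAID the persistent setting lives in the network GUI; this is the transient command-line equivalent, useful for testing, and the MTU must match on the server, the client, and any switch in between):

```shell
IFACE=eth0   # assumption: the 10GbE interface name
ip link set dev "$IFACE" mtu 9000 2>/dev/null || echo "needs root and a real $IFACE"
# Confirm jumbo frames end-to-end: 8972 = 9000 minus 28 bytes of IP/ICMP headers
# ping -c 3 -M do -s 8972 <other-10GbE-host>
ip link show dev "$IFACE" 2>/dev/null | grep -o 'mtu [0-9]*' || true
```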
  20. Disk has seen better days; if you're determined to use it, at least run it through a couple more preclear cycles.
  21. It *may* work with the latest unRAID, but IIRC I've never seen a post of someone using one. This one works for sure.
  22. Correct, but since most SSDs don't support that it won't work for most users.