JorgeB

Moderators
  • Posts: 67,125
  • Joined
  • Last visited
  • Days Won: 703
Everything posted by JorgeB

  1. You just need to connect all 4; FYI, device order isn't important in a pool, as long as all the devices are present it's fine.
  2. Yes, obviously this limit will only be noticed when all disks are used simultaneously.
  3. Yes, the Intel expander can be powered by the PCIe slot or a molex connector.
  4. lol, I quoted from your quote and that's how it appeared, it's an IPS bug
  5. You can use a single link, and you'll have 2200 MB/s total usable for the disks on that link (see the per-disk bandwidth sketch after this list).
  6. You can, but pre-v6.4.1 it would be more complicated than replacing the pool; the procedure for doing it is in the FAQ if you want to try it.
  7. With v6.4.1-rc1 you can easily replace them one at a time using the GUI; for any prior release it's better to use the procedure below: https://lime-technology.com/forums/topic/46802-faq-for-unraid-v6/?do=findComment&comment=511923
  8. This is still happening; I just had another post disappear from the unread list.
  9. That's good news, and thanks for posting; it might help someone with a similar issue in the future. Sounds good to me, and I definitely agree on the format. You should be fine on the latest release, and if there's a problem with xfs_repair it should be fixed soon in an upcoming kernel.
  10. If you can still edit the wiki, please change the 9207-8i to working OOTB (out of the box).
  11. https://lime-technology.com/wiki/Troubleshooting#Re-enable_the_drive Exactly, so if it fails again there could really be an issue with the disk; SMART is a good indication, but a healthy SMART report doesn't always equal a healthy disk.
  12. I never did this on v6.4; it looks like something changed. I'll have a look when I have the time.
  13. Without a pre-reboot syslog we can't see what happened, but the disk looks healthy, so you'll need to rebuild, either onto a new disk or onto the old one. Since you have dual parity it's not so risky to use the old one; just make sure the contents of the emulated disk look correct, since whatever's there is what's going to be on the rebuilt disk. I would also recommend swapping/replacing the cables/backplane slot, just to rule that out in case the same disk fails again.
  14. That was for the OP; I already responded to you in your thread.
  15. Please post the complete diagnostics, ideally after the disk was disabled and before rebooting: Tools -> Diagnostics
  16. That's one of the reasons RAID controllers are not recommended. You can do a new config: assign all disks as data drives and start the array; you should get one unmountable disk, and that will be your parity (if you get more than one, grab and post your diagnostics). Then do another new config, now with parity assigned, and check "parity is already valid" before starting the array. Finally, run a parity check, since a few sync errors are expected.
  17. v6.4 includes a newer kernel, so it likely also includes some xfs changes; whether that's related is difficult to say. That's what we usually recommend, and it has been used successfully before, mostly to recover data from accidentally formatted xfs disks.
  18. Though it's rare, I've seen at least 3 or 4 cases where xfs_repair couldn't fix the filesystem. Your filesystem looks very damaged: there's metadata corruption, and the primary superblock is damaged as well. xfs_repair has very few options and is usually very simple to use, but when it fails there's really nothing a normal user can do. If you can't restore from backups, IMO your best option would be the xfs mailing list, and even if they can help, you'll likely end up with a lot of corrupt/lost files.
  19. It's difficult to help without the diagnostics and the output of xfs_repair, but if -L doesn't work your best bet is to ask an xfs maintainer for help on the xfs mailing list (a typical xfs_repair sequence is sketched after this list).
  20. Oops, I was on the phone and confused the posts; yeah, yours should work. palmio's doesn't support trim, if it really is a SAS2008.
  21. What HBA are you using? There's another report below yours of a 9207, which supports trim, that also stopped working on v6.4.
  22. Change the filesystem to reiserfs or btrfs, start the array, format, then change back to xfs and format again.
  23. Also, let me add that the 2TB disk was probably formatted with an earlier xfs version, and that's the likely reason it showed as emptier; xfs started using more space on newer kernels, IIRC a couple of years ago or so.
  24. Don't know; ask an xfs maintainer. It's likely some metadata reserve for filesystem housekeeping; I can only tell you that it's normal (see the quick usage check sketched after this list).
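
Regarding item 5, here is a minimal sketch of the per-disk bandwidth math, assuming a single SAS2 x4 wide link with roughly 2200 MB/s usable (the figure quoted above) shared evenly when all disks transfer at once; the exact numbers depend on the HBA and expander.

```python
# Rough per-disk bandwidth on a single SAS2 x4 link to the expander.
# Assumption: ~2200 MB/s usable for the whole link, shared evenly when all
# disks on that link are reading or writing at the same time.
LINK_USABLE_MB_S = 2200

for disks in (4, 8, 12, 16, 24):
    print(f"{disks:2d} disks -> ~{LINK_USABLE_MB_S / disks:.0f} MB/s per disk")
```

With 8 disks that's still about 275 MB/s each, more than most spinning disks can sustain, which is why the limit is only noticed when many disks are used simultaneously (item 2).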
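
Regarding items 18 and 19, here is a minimal sketch of the usual xfs_repair sequence, assuming the affected drive is array disk 1 (so the device is /dev/md1; substitute your own disk number) and the array is started in maintenance mode so the filesystem isn't mounted.

```python
# Typical xfs_repair sequence on an Unraid array disk.
# Assumptions: disk 1 (/dev/md1), array started in maintenance mode so the
# filesystem is not mounted; run from a root console.
import subprocess

DEVICE = "/dev/md1"  # assumed; use the md device matching your disk number

# 1. Dry run: report problems without modifying the filesystem.
subprocess.run(["xfs_repair", "-n", DEVICE])

# 2. Actual repair attempt.
subprocess.run(["xfs_repair", DEVICE])

# 3. Last resort only: -L zeroes the metadata log, which can lose the most
#    recent changes, so use it only if xfs_repair explicitly asks for it.
# subprocess.run(["xfs_repair", "-L", DEVICE])
```

If the repair completes, check the root of that disk for a lost+found folder, as orphaned files are moved there.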
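
Regarding items 23 and 24, here is a quick way to see how much space an "empty" xfs disk already reports as used; the /mnt/disk1 mount point is just an assumption, substitute your own.

```python
# Show total/used/free space for a mounted disk; on a freshly formatted xfs
# disk the "used" figure is the metadata/log overhead discussed above.
# Assumption: the disk is mounted at /mnt/disk1.
import shutil

usage = shutil.disk_usage("/mnt/disk1")
print(f"total: {usage.total / 1e9:,.1f} GB")
print(f"used:  {usage.used / 1e9:,.1f} GB")
print(f"free:  {usage.free / 1e9:,.1f} GB")
```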