
JorgeB

Moderators
  • Posts: 67,125
  • Joined
  • Last visited
  • Days Won: 703

Everything posted by JorgeB

  1. For the future, follow the FAQ instructions to remove a cache device, it's much safer. For now your best bet is probably to try to mount it read-only, copy all the data, and format:

     mkdir /x
     mount -o recovery,ro /dev/sdX1 /x

     Replace X with the actual device.
  2. You should avoid preclearing solid state devices. If you just want to wipe it, use blkdiscard instead:

     blkdiscard /dev/nvme0n1
  3. It's not required, but it's highly recommended; there's no trim without it.
  4. I use the user scripts plugin.
  5. Devices are tracked by serial number, not controller port, so you won't need to do anything.
  6. SAS2008-based controllers don't support trim on most SSDs; you should connect the SSD to the onboard controller, swapping with another disk if needed.
  7. That would be disappointing; if that's the case I would only use it for a desktop. All my unRAID servers use ECC and I'm not going back to regular RAM. I remember those, can't really say that I was sad to see them go...
  8. They do look very good, about time AMD got their act together, my last desktop with an AMD CPU was the 64 X2, I intend to buy one (for desktop or unRAID) as soon as I can afford it.
  9. It's been a while since I really used DOS, but IIRC you need to add something like this to your config.sys:

     DEVICE=Path\to\HIMEM.SYS
     DOS=HIGH,UMB

     Try the "remove other DIMMs" option first, it's easier if it works for you.
  10. I had a similar error fixed by booting DOS with himem.sys
  11. http://lime-technology.com/wiki/index.php/Crossflashing_Controllers#LSI_SAS2008_chipset
  12. Looks great! You should add that to the FAQ
  13. No, this would only work to have VMs/Dockers, etc.
  14. It's possible to use a btrfs pool with the unassigned devices plugin. It won't look pretty since it's not officially supported (i.e., only one device shows as mounted), but it works with any raid profile:

      btrfs fi show /mnt/disks/Test
      Label: none  uuid: 75c7d7f5-74e4-4662-b465-c400b7384a6c
          Total devices 2 FS bytes used 1.75GiB
          devid 1 size 232.89GiB used 2.03GiB path /dev/sdf1
          devid 3 size 232.89GiB used 2.03GiB path /dev/sde1

      btrfs fi df /mnt/disks/Test
      Data, RAID0: total=2.00GiB, used=1.75GiB
      System, RAID1: total=32.00MiB, used=16.00KiB
      Metadata, RAID1: total=1.00GiB, used=2.02MiB
      GlobalReserve, single: total=16.00MiB, used=0.00B

      ETA: You'll need to manually create the pool first (or use the cache slots).
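      A rough sketch of that manual pool-creation step, matching the RAID0 data / RAID1 metadata profiles shown above. The paths here are placeholders, and backing files are used so the sketch is safe to run; on a real server you would point mkfs.btrfs at the actual partitions (which destroys their contents):

      ```shell
      # Sketch only: create a two-device btrfs pool on throwaway backing files.
      # On real hardware, substitute the device paths, e.g. /dev/sdf1 /dev/sde1.
      truncate -s 256M /tmp/btrfs_dev_a.img /tmp/btrfs_dev_b.img

      # -d raid0: data striped across both devices
      # -m raid1: metadata mirrored, as in the "btrfs fi df" output above
      mkfs.btrfs -f -d raid0 -m raid1 /tmp/btrfs_dev_a.img /tmp/btrfs_dev_b.img

      # Once created, mounting any one member brings up the whole pool, e.g.:
      #   mount /dev/sdf1 /mnt/disks/Test
      ```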
  15. FYI, since v6.2 clear is done with the array online, but disk is not tested like when using preclear.
  16. That's normal, unRAID only checks for the preclear signature after starting the array.
  17. I believe the problem is related to this: http://lime-technology.com/forum/index.php?topic=52362.msg503483#msg503483 But this was supposed to be fixed on the kernel included with v6.3, I would just convert all disks to xfs to avoid these and the other reiserfs issues.
  18. Was this feature removed? I'm using latest preclear and trying to stop the array during a preclear doesn't work anymore, stuck on sync filesystems... Any idea why this stopped working? It's not just me as I've seen at least a couple of users with the same problem recently.
  19. You can't fit a PCI-E connector on the CPU AUX or vice versa, they are keyed differently, good thinking from someone
  20. I believe rsync also has a sparse flag; users with vdisks on the array should use it.
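      The flag in question is -S (--sparse). A minimal demonstration with a throwaway file (paths are just examples): the copy keeps the apparent size of the source but allocates no data blocks for the holes, which is exactly what you want for mostly-empty vdisk images.

      ```shell
      # Create a 64 MiB sparse file: full apparent size, zero allocated data blocks
      truncate -s 64M /tmp/vdisk.img
      mkdir -p /tmp/vdisk-copy

      # -a archive mode; -S (--sparse) re-creates holes on the destination
      # instead of writing the zero runs out in full
      rsync -aS /tmp/vdisk.img /tmp/vdisk-copy/

      # Compare apparent size vs. actual allocation of the copy
      ls -lh /tmp/vdisk-copy/vdisk.img
      du -h /tmp/vdisk-copy/vdisk.img
      ```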