Everything posted by JorgeB

  1. For some capacities they were replaced with Red Plus models, but they are still CMR, e.g. the WD30EFRX is the same as the WD30EFZX. Perfectly fine, those should all be CMR.
  2. I have a script that takes snapshots of my VMs every day during the night, then sends them incrementally to another pool; I keep about a month's worth if needed. There's some info on how to do this in the VM FAQ thread, and a rough sketch below. You can remove both devices from the server, or leave the other one; just unassign both and you can start the server normally. Nope.
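     A minimal sketch of that kind of nightly snapshot plus incremental send, assuming the VM vdisks live in a btrfs subvolume at /mnt/cache/domains and the other pool is mounted at /mnt/backup (example paths only, see the VM FAQ thread for the full details):

        #!/bin/bash
        # Sketch only: nightly read-only snapshot of the VM subvolume,
        # sent incrementally to a second btrfs pool. Paths are examples.
        SRC=/mnt/cache/domains        # btrfs subvolume holding the VM vdisks
        SNAPS=/mnt/cache/snaps        # snapshot directory on the same pool as SRC
        DST=/mnt/backup/snaps         # destination directory on the other btrfs pool
        TODAY=$(date +%Y%m%d)

        mkdir -p "$SNAPS" "$DST"
        # most recent existing snapshot, used as the incremental parent
        PREV=$(ls -1d "$SNAPS"/* 2>/dev/null | tail -n 1)

        # btrfs send requires a read-only snapshot
        btrfs subvolume snapshot -r "$SRC" "$SNAPS/$TODAY"
        sync

        if [ -n "$PREV" ]; then
            # incremental send; the parent snapshot must also exist on the destination
            btrfs send -p "$PREV" "$SNAPS/$TODAY" | btrfs receive "$DST"
        else
            # first run: full send
            btrfs send "$SNAPS/$TODAY" | btrfs receive "$DST"
        fi

     Old snapshots on both sides can later be pruned with btrfs subvolume delete once they are past the retention window.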
  3. Amazon.es and Amazon.it have a very good price for these: https://www.amazon.es/gp/product/B00RQYHL6O/ref=ox_sc_act_title_1?smid=A1AT7YVPFBWXBL&psc=1 They are usually around 300€, which is the current price on Amazon.fr or Amazon.de for example. EDIT: The price went back to normal on Amazon.es, around 300€; Amazon.it currently has them for 200€, not as low as before but still below the usual price.
  4. Forgot to mention: if you do this, make sure the bad SSD is physically disconnected from the server before mounting the clone with the other pool member, and for Unraid to accept the new pool config you need to do the following:
     - Stop the array; if the Docker/VM services are using the cache pool, disable them.
     - Unassign all cache devices.
     - Start the array to make Unraid "forget" the current cache config.
     - Stop the array.
     - Assign the clone together with the other pool member (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any cache device).
     - Re-enable Docker/VMs if needed and start the array.
  5. I'm sorry, but you were using raid0 and didn't have any backups? Even with redundant storage you should still have backups of anything important. You can try cloning the bad SSD with ddrescue; it's not optimized for flash-based devices, but it generally works. Then mount the clone together with the other one (rough example below).
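     A rough example of the ddrescue step, assuming the failing SSD is /dev/sdX and the clone target is /dev/sdY (placeholder names, confirm them with lsblk first, the target gets overwritten):

        # first pass: copy everything readable, keeping a map of the bad areas
        ddrescue -f /dev/sdX /dev/sdY /root/ddrescue.map
        # optional extra pass retrying the bad areas a few times
        ddrescue -f -r3 /dev/sdX /dev/sdY /root/ddrescue.map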
  6. That wasn't a disk problem, it was a filesystem problem.
  7. Strange about disk6, I don't see how there could have been a duplicate filesystem that is now gone, but if all is working now...
  8. You still need to run xfs_repair on disk5 (see the example below), also:
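     For reference, something like this from the console, assuming the array is started in maintenance mode so disk5 maps to /dev/md5 (on recent releases it may be /dev/md5p1 instead); the equivalent can also be done from the disk's page in the GUI:

        xfs_repair -v /dev/md5
        # if it complains about a dirty log and asks for -L, re-run with:
        # xfs_repair -vL /dev/md5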
  9. Yes, start with a config as simple as possible, then if all works add the other ones one by one. Also don't forget that, unlike for example Windows, you can't have more than one NIC in the same subnet.
  10. Only if you had previously created checksums for all files, have the same files on a backup server for example, or were using btrfs (a sketch of the checksum approach is below).
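     For future reference, a minimal way to create and later verify checksums (the share path and checksum file location are just examples):

        # create checksums once for everything on the share
        find /mnt/user/data -type f -exec md5sum {} + > /mnt/user/backups/checksums.md5
        # later, verify and list only the files that no longer match
        md5sum -c /mnt/user/backups/checksums.md5 | grep -v ': OK$'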
  11. Try using only one NIC and leave the other ones disabled/down (example below).
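     The persistent way is through Settings -> Network Settings, but for a quick test from the console (interface name is just an example):

        ip link set eth1 down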
  12. The normal syslog starts over after every reboot, so there's not much to see.
  13. Parity disk appears to be failing, and because of the read errors during the rebuild there will likely be some corruption on the rebuilt disk, unless by luck there was no data in those sectors.
  14. This means the drive is dropping offline, replace or swap cables, both SATA and power, or connect the SSD to a different controller.
  15. By the description it looks more like a hardware problem; you can try enabling syslog mirror to flash, then post that log after a crash to see if it catches anything.
  16. Tools -> New Config will allow you to assign a different disk; data will be kept as long as the disk was using the standard Unraid partition.
  17. There's not, but there's nf_nat, which is usually related. You can also enable syslog mirror to flash and post that log after a crash; it might show more call traces.
  18. We strongly recommend not using USB for array or cache devices.
  19. That's likely a board/BIOS issue, have you tried contacting Gigabyte support?
  20. eth0 is being renamed. Reset the network config by deleting/renaming network.cfg and network-rules.cfg on the flash drive (see the sketch below), then start by trying with just eth0 without any bonding.
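     Something like this from the console, assuming the flash drive is mounted at /boot as usual (renaming instead of deleting so the old config can be restored):

        mv /boot/config/network.cfg /boot/config/network.cfg.bak
        mv /boot/config/network-rules.cfg /boot/config/network-rules.cfg.bak
        reboot

     After the reboot Unraid recreates default network settings and you can start with just eth0.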
  21. I don't understand the question; maybe post a screenshot of the current array and then explain again what you want to do.