Community Reputation: 3 Neutral

About WizADSL


  1. I came into the room where Unraid lives and noticed the server was off. It's on a UPS, and prior to a graceful reboot I did about 45 days ago it had run for 377 days without issue. When I powered it back on, it looked like some of the CPUs either crashed/panicked or failed to come up; the earlier diagnostics should show this. Because the web interface had stopped responding, I had to force a power off. I restarted the system and it looks like it is back up. Obviously a parity check is in progress. I'm still seeing 3 BTRFS errors in the log (see second diagnostics): May 7 03:55:09 T
  2. Were the drives with the issue originally sold as externals?
  3. I'm seeing this now in 6.8.3 (and have for a while). If I click the close gadget on a notification, the notification disappears for a moment (a second or less) and then reappears. If I refresh the browser after having done that, the notification does not return. In my case I use Chrome; perhaps it was a change there?
  4. Unless you have sensitive data to protect, I wouldn't encrypt your array. The only person who will end up locked out of your data is you. Encryption comes with risk that is typically only outweighed by a genuine need for security, and this doesn't sound like one of those cases.
  5. Maybe we're saying different things? If you are running a VM in Unraid that is completely separate and isolated from the Unraid host, then it is the VM's kernel that needs ZFS support. What you are saying would be the same as if I said I was going to run a Windows Server 2019 VM in Unraid and wanted to use the ReFS filesystem in that VM, so the Unraid kernel would need to support it, which is unnecessary.
  6. If TRUENAS is running as a VM in UNRAID wouldn't you just want to pass the USB devices through to the VM in the VM configuration? I don't see why you would need ZFS support in the host kernel.
  7. Try: You can try using sfdisk to copy the partition table from one drive to another. I have not tested this personally, but it's worth a try, although I would make sure you don't have irreplaceable data on the subject disk, as I assume this will nuke it. In the command below, "sfdisk --dump" dumps the partition table layout of /dev/sdd in a format that the following command (through the pipe) can then use to write the same layout to /dev/sdv. If you examine the output of the --dump command you can see what will be done, and you could probably do it "manually" if you prefer. sfdisk -
  8. Ok, so where's Red Dwarf then?
  9. I would recommend disabling any ad blockers when accessing the unraid web interface. I have had problems in the past with this.
  10. Another long shot, have you tried booting EFI?
  11. Based on what was happening with the Sqlite databases, could this have caused corruption in other files on the array?
  12. On 6.8rc1 I am still seeing writes starve reads. I'm wondering if these parameters are an issue for me, since others have reported that 6.8rc1 has solved the problem for them.
  13. Thank you for that. I'm hoping someone can chime in on the remaining parameters.
  14. Is there any guidance available for MD tunables on version 6.8rc1? Does the size or number of disks, or the amount of system memory, affect what the best values would be? Under very high I/O I still have some performance issues, and I would like to see if any of the parameters can be adjusted to mitigate that.
  15. Now that we presumably know the source of the corruption do we know if any other type of data would have been affected? Do we know what type of disk activity would have resulted in corruption?