molesza

Everything posted by molesza

  1. Swapped out my SSD and all is working. Extended tests did not show any issues with that drive. I swapped the SATA cable too, so that may have been it.
  2. I don't remember starting them. Before starting fresh I had followed SpaceInvader One's guide to change my SSD drives to ZFS. I changed all shares to move the files from cache to the array. So maybe because those shares existed when I booted up for the first time, Docker and the VM manager were automatically started? I have disabled them now.
  3. Got to 98% of the copy and now the file transfer has been stopped for about 20 minutes. Hmmm. At a loss.
  4. I have my SSDs connected to an LSI HBA from 2011. Wonder if that might be the problem. I am going to try connecting all drives to the onboard SATA ports.
  5. So I have been battling terrible performance in Unraid, and after trying for hours I decided to just start with a fresh install. I made a backup of my Unraid key and reinstalled Unraid to the flash drive so I could start fresh. I started Unraid up but cancelled the parity check so I could copy a backup of my Plex appdata back to the cache drive. I don't have critical files. I formatted my two SSD drives as ZFS, in two separate pools: one for downloads and one for VMs and Docker.
     I brought up the web terminal and started copying the Plex appdata folder to the cache drive. The copying started off quickly, copying a few thousand files every 5 seconds or so (Plex is full of small files). The thing is, the copying then pauses completely after about 10 seconds, and I can see high IO wait and CPU usage. Surely copying from HDD to SSD shouldn't be a bottleneck. There is absolutely nothing else installed on this fresh install other than the Community Applications plugin, so that I am ready to install Plex after the copy.
     So I am beginning to think that I have a hardware issue. Does anyone have any suggestions? I will include my diagnostics. Really hope someone can help here. Thanks for taking the time to read. tower-diagnostics-20230627-2106.zip
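     A minimal sketch of how the stalls could be narrowed down while the copy runs (device names are placeholders, and iostat assumes the sysstat tools are available on the box):

     ```bash
     # The "wa" column here is the percentage of CPU time spent waiting on I/O;
     # if it spikes while the copy pauses, the bottleneck is a device, not the CPU.
     vmstat 5

     # Per-device view: high %util and await on one drive points at the device
     # (or the controller/cable behind it) that is actually saturated.
     # Replace sdX and sdY with the source HDD and destination SSD.
     iostat -x 5 /dev/sdX /dev/sdY
     ```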
  6. I swapped out the SSD to troubleshoot and this solved the problem. Not sure what caused it in the first place. I have moved away from btrfs as it's only a single drive. Could this have been the issue? I am going to put the old SSD into another machine and check for any issues with the drive itself. With the new SSD in, all is good!
  7. Thanks for the reply, Squid. These orphaned images happen without me changing a thing. I only update from the Docker tab.
  8. I forgot to mention the Docker orphans. I will get random containers that stop working, and then I see that the image is orphaned on the Docker page. I have to manually add the container again to get it running. No idea why this is also happening.
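     A rough sketch of a way to check from the terminal whether the Docker daemon itself is holding dangling image layers (not necessarily the same thing Unraid's Docker page flags as orphans, so treat it as a cross-check only):

     ```bash
     # List image layers that no longer have a tag or a container referencing them.
     docker images --filter "dangling=true"

     # Remove only those dangling images; running containers and their
     # templates are untouched.
     docker image prune
     ```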
  9. I am trying to work out why my system locks up every now and then. When this happens the CPU usage shows 100% on all 4 cores. This will continue for a few minutes and then the system comes back down to normal usage. My system: Intel i5-6600K, 16 GB memory. I am only using 30% of the system memory, and I'm pretty sure the i5-6600K is plenty for my use case. I only have a few Docker containers running and am not running any VMs. I have attached my diagnostics. I tried looking through the logs but I'm not seeing anything. Hope someone can shed some light on this for me. Thanks. tower-diagnostics-20230423-1922.zip
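     A simple sketch of a way to catch what is actually eating the CPU during one of these spikes, by logging the top consumers to the flash drive and reading back the entries from around the next lockup (the path and interval are just examples):

     ```bash
     # Log the top CPU consumers every 10 seconds; stop with Ctrl-C once a
     # lockup has been captured.
     mkdir -p /boot/logs
     while true; do
       echo "==== $(date) ====" >> /boot/logs/cpu-watch.txt
       ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -n 15 >> /boot/logs/cpu-watch.txt
       sleep 10
     done
     ```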
  10. Unraid did mention that it would rebuild parity when I did this but it only formatted the drives. Maybe I should run a parity check?
  11. OK, all good! I let the array rebuild parity. Then I used Unbalance to move the data off the drive that still had data on it. After this I stopped the array and set all the disks to "auto" for the file system. Started the array and the 3 disks were waiting to be formatted. I went ahead and did this, and now the drives are all mounting as XFS automatically. Thank you, JorgeB!
  12. Much appreciated! I will do so after parity rebuild and report back.
  13. OK cool. How would I do that with the "dd" command please? Thanks
  14. It came back with a new UUID when I ran the command, but it still didn't mount; the blkid output is attached. I have to go to the office now and I'm quite worried about losing data. I am going to mount forcing XFS on those three drives and let parity rebuild. I'm thinking once that is done I will: 1) transfer the data off all those drives, 2) set the filesystem back to auto, 3) stop and start the array, 4) format the unmountable drives. This should do the trick, right? I'm just worried about having no parity and a drive failing. Really, really appreciate your help so far.
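      For context, a sketch of the kind of commands typically involved at this step; the exact commands used in the thread aren't quoted here, and sdf1 is a placeholder (the partition, not the whole disk, is the target):

      ```bash
      # Check the filesystem first without modifying anything.
      xfs_repair -n /dev/sdf1

      # Write a freshly generated UUID to the XFS partition.
      xfs_admin -U generate /dev/sdf1

      # Confirm the new UUID and filesystem type as the kernel now sees it.
      blkid /dev/sdf1
      ```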
  15. Here is the output. Should I be specifying "sdf1"? Because it doesn't seem to do anything. Thanks
  16. I am not sure, unfortunately. Can I create a UUID on those drives now? If not, I could transfer all the data off to other drives and reformat? Or when they eventually get replaced, that would sort out the problem, I guess? It doesn't really bother me, as long as the integrity of the array is OK.
  17. Here are the results of blkid and my current array, which is rebuilding parity. There are four unassigned devices currently preclearing: sdb, sdk, sdm, sdn. Thanks
  18. Thanks for your reply! Below is the output. I stopped the array and specified "xfs" for the filesystem on those drives instead of "auto", and the array started up with all drives mounted, which is great! However, I feel there is an underlying problem that needs sorting. I have removed the ZFS plugin and the "ZFS Companion" plugin and rebooted. No reports of "zfs_member" on the drives now.
  19. I temporarily removed all drives from the array whilst it was stopped so that Unassigned Devices would read them. It shows file systems all over the place. I have also realised that the 2TB drive that has information on it and is unmountable was never part of the ZFS pool. It was one of the original drives in the Unraid array, used to move data from the ZFS pool to the array. There were 8 drives in total in the ZFS pool: one vdev of 4x2TB drives and another vdev of 4x4TB drives, all part of the same pool. The only drives showing up as "zfs_member" are the 4x4TB drives. One of those 4TB drives is the current parity drive, so it can't possibly be part of a ZFS pool.
  20. I think I have messed up in a big way! Background: I am coming from a FreeNAS install. I installed the ZFS plugin to mount my pool in Unraid and copy to a temporary array on Unraid. This worked just fine. After doing that I added the drives from the ZFS pool to the Unraid array. They all formatted fine and my array was working fine.
      I then decided to remove some of the drives from the Unraid array and moved all of my data from those drives to other drives. After doing that I stopped the array, created a new config, and added back the drives I wanted to keep in the array, making sure to put parity in the correct slot. Once I started the array the parity build started, but 3 of my drives reported back as unmountable and needing to be formatted. One of the 3 drives has over 1TB of data on it that I don't want to lose.
      I am starting to think that I should have destroyed the ZFS pool somehow and that the Unraid array is sitting on top of this ZFS pool? But surely when I added the entire pool to the Unraid array and formatted, that should have killed it. My other thought is that the ZFS pool plugin is still running and may be causing an issue? If I start a new config, a couple of the drives show as "zfs_member", but not all 8 drives that were part of the pool. Should I remove the ZFS plugin and reboot? I don't want to do anything until someone can possibly point me in the right direction. I want to minimize the risk of losing family photos. Thanks chaps.
      The 3 drives that were part of the ZFS pool are showing as unmountable. If I stop the array and remove the drives from the array so that they show up in unassigned, I see the following: no file system and they can't mount. The partitions are there, though. unraid-diagnostics-20210813-0717.zip
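      A rough sketch of how leftover ZFS signatures can be checked for from the terminal (device names are placeholders, and the destructive command is left commented out because it should only ever run once the data has been copied safely elsewhere):

      ```bash
      # Show what signature the kernel sees on each partition; a leftover
      # "zfs_member" here is what stops Unraid from treating the disk as XFS.
      blkid /dev/sdX1

      # Dry run: list the filesystem signatures wipefs would remove, changing nothing.
      wipefs -n /dev/sdX1

      # Destructive: erase the signatures so the drive can be formatted cleanly.
      # Only run this on a drive whose data has already been moved off.
      # wipefs -a /dev/sdX1
      ```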
  21. Thanks for that. I am currently running another preclear but will try that after and post my results.
  22. https://imgur.com/a/1llZ82M I swapped out this drive for another drive because I was getting errors. Then I thought I would run a preclear to see what happens. The preclear finished successfully. Is this hard drive now safe to use? It has cleared all the pending sectors. Thanks
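      A short sketch of how the drive's health could be double-checked after the successful preclear (sdX is a placeholder for the precleared drive):

      ```bash
      # Re-read the SMART attributes and focus on the counters that matter.
      smartctl -A /dev/sdX | grep -Ei 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable|UDMA_CRC'
      ```
      Current_Pending_Sector back at 0 usually means the suspect sectors were rewritten or remapped during the clear; the real warning sign is Reallocated_Sector_Ct continuing to climb over time.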
  23. Fixed: 1) Stopped all containers. 2) Went to "Settings > Docker". 3) Deleted the image. 4) Enabled Docker again. This recreates the Docker image. You then have to reinstall all your containers, but the templates are still there and you won't lose any settings, so your containers will work as before.
  24. I don't have pfSense running at all. Figured it wasn't a good idea to have it running on my Unraid server.