Kuleinc

Everything posted by Kuleinc

  1. I put it in an x4 slot, which is not ideal, and it started working. Maybe that x8 slot is bad? Thank you for your help.
  2. For anyone else with a Supermicro 846 chassis: the case fans are not enough to cool the Tesla P4 in the x16 slot. I put the blower fan that shipped with the card back onto it and it cooled right off; I was getting thermal throttling in a 55 °F environment with just the case fans. It would probably work without its own fan in a 2U case with more direct airflow. I cannot hear the blower fan on the P4 at all over the standard Supermicro case fans, so that's nice.
  3. The LSI 9220-8i is the old card that stopped working when I put the new card into the system; the 9300-8i is the new card and is working fine. Also, to be clear, there is an older SAS controller built into the motherboard that I am not using, which will likely show up in the diagnostics.
  4. This would be for my HDD cache; none of the drives attached to the old card show up. I've swapped PCIe slots between the cards with no change. I checked in the BIOS, and the slot it's plugged into is PCIe 3.0 x8, which the card uses. What am I missing? I've ruled out bad cables between the HBA and the backplane, and also ruled out bad backplane slots. nas-diagnostics-20240216-1603.zip
  5. Interested in trying this. I have a Supermicro 846 4U chassis with 5 chassis fans. Would I still need a fan on the card itself?
  6. N/m, I forced a reboot and am doing a parity check.
  7. Not sure what happened, but it's now restarted with a parity check. I guess I will try the mover again after parity finishes.
  8. So, the server won't reboot. It's stuck on unmounting user shares. How do I proceed?
  9. The mover appears to be stalled, so I am going to reboot and try again, as it's clearly stuck.
  10. Neither the HDD status LEDs nor the log seem to be moving now. I'll give it some more time, but it seems to have stalled again...
      Jan 19 09:46:58 NAS move: file: /mnt/hdd_cache/appdata/netdata/cache/dbengine-tier2/journalfile-1-0000000002.njf
      Jan 19 09:46:58 NAS move: file: /mnt/hdd_cache/appdata/netdata/cache/dbengine-tier2/journalfile-1-0000000002.njfv2
      Jan 19 09:46:58 NAS move: file: /mnt/hdd_cache/appdata/netdata/cache/dbengine-tier2/datafile-1-0000000003.ndf
      Jan 19 09:46:58 NAS move: file: /mnt/hdd_cache/appdata/netdata/cache/dbengine-tier2/journalfile-1-0000000003.njf
      Jan 19 09:46:58 NAS move: file: /mnt/hdd_cache/appdata/netdata/cache/dbengine-tier2/journalfile-1-0000000003.njfv2
      Jan 19 09:46:58 NAS move: file: /mnt/hdd_cache/appdata/netdata/cache/dbengine-tier2/datafile-1-0000000004.ndf
      Jan 19 09:46:59 NAS move: file: /mnt/hdd_cache/appdata/netdata/cache/dbengine-tier2/journalfile-1-0000000004.njf
      Jan 19 09:46:59 NAS move: file: /mnt/hdd_cache/appdata/netdata/cache/.netdata_bash_sleep_timer_fifo
      It stopped here in the log. It had been flying by previously.
  11. I used the Unraid web GUI file browser to confirm that the appdata folder on the array does indeed NOT have copies of the files... perhaps I'm looking for the old files wrong? I figured it out: I had changed the primary storage as well as the secondary. I put the primary storage back to the original locations and changed the secondary storage to the array. I THINK it's working now... will confirm.
  12. Well, the HDDs don't show activity, but there are clearly commands going by in the log now. It's just a move command, then a corresponding "File exists", then repeat... reboot and try again? It seems like it's moving files to the place it got them from?
      Jan 19 09:16:21 NAS move: move_object: /mnt/hdd_cache/appdata/Zoneminder/data/events/1/2024-01-19/3/00584-capture.jpg File exists
      Jan 19 09:16:21 NAS move: file: /mnt/hdd_cache/appdata/Zoneminder/data/events/1/2024-01-19/3/00585-capture.jpg
      Jan 19 09:16:21 NAS move: move_object: /mnt/hdd_cache/appdata/Zoneminder/data/events/1/2024-01-19/3/00585-capture.jpg File exists
      Jan 19 09:16:21 NAS move: file: /mnt/hdd_cache/appdata/Zoneminder/data/events/1/2024-01-19/3/00586-capture.jpg
      Jan 19 09:16:21 NAS move: move_object: /mnt/hdd_cache/appdata/Zoneminder/data/events/1/2024-01-19/3/00586-capture.jpg File exists
  13. Hi, I'm trying to reorganize things on my server. I tried to use the mover to move things off my cache shares (I have two) onto the array, so I can remove some failing drives and reorganize other drives into other shares. The mover button is gray and says it's running, but when I look at the Main tab there are no reads or writes being done on any disks. I also checked the log, and nothing seems to be happening there either. Any suggestions on how to proceed without starting over? Just wait longer? nas-diagnostics-20240119-0819.zip
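An editor's note for anyone debugging a similar stall: a quick way to confirm whether the mover is actually doing anything is to look for the mover process, check the tail of the syslog, and see what still has files open on the cache. A minimal sketch, assuming a stock Unraid layout with the cache pool mounted at /mnt/hdd_cache (adjust the path to your pool name):

```shell
# Is the mover process still alive?
# (the [m] keeps grep from matching its own command line)
ps -ef | grep '[m]over'

# Show the most recent mover activity in the system log;
# if the last "move:" entry is minutes old, the mover has stalled
grep 'move:' /var/log/syslog | tail -n 20

# List open files on the cache pool -- a file held open by a
# running container or VM can keep the mover from finishing
lsof +D /mnt/hdd_cache 2>/dev/null | head -n 20
```

An open file is the most common reason a move silently stops, which is why stopping Docker and VMs before a bulk cache-to-array move is usually recommended.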
  14. I can't start Docker to clear the files either. I even tried deleting the files in the terminal using rm -r * with no luck. The full cache is named hdd_cache. nas-diagnostics-20230323-2224.zip
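A side note on the full-cache situation above: a BTRFS pool that has allocated all of its chunks can refuse writes even when `df` is not quite at 100%, and freeing space then takes a balance rather than (or before) deleting files. A hedged sketch, assuming the pool is mounted at /mnt/hdd_cache:

```shell
# See how BTRFS has allocated the raw space (data vs. metadata chunks)
btrfs filesystem usage /mnt/hdd_cache

# Reclaim completely empty data chunks -- cheap, and often enough
# to make a "full" pool writable again
btrfs balance start -dusage=0 /mnt/hdd_cache

# If nothing was freed, also compact chunks that are <=10% used
btrfs balance start -dusage=10 /mnt/hdd_cache
```

Starting with `-dusage=0` and raising the threshold gradually keeps the balance fast; a full unfiltered balance on a nearly full pool can take hours.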
  15. I did. It failed. I replaced it and now am having issues with the new disk: I ran an extended test on it and it completed with a read failure. The drive was shipped to me in a bag with no packing materials, ugh... I tried the drive on two different ports on the expander card with two different cables and it still fails. I just missed the return deadline, so I gave a bad review. I will replace it with IronWolf drives, as they have been flawless and are worth the extra money. Thanks for your help.
  16. I am getting an error about my app_data cache drive being read-only, or not auto-mounted, or completely full. I'm not sure how to fix this. I think the system may think it's XFS; I'd like BTRFS, which is what it is actually formatted as, but I don't know how to correct this problem. nas-diagnostics-20230305-1437.zip
  17. I pulled out the bad drive and put the new one in its place, hoping that works...
  18. Trying to just replace the failing drive now, but it doesn't show up on the Main page... Where do I find it, or how do I get it to show up? nas-diagnostics-20230305-1003.zip
  19. OK, the original disk seems to be working for now. Can I split up the appdata share onto the array and just have the Zoneminder Docker run on the new 3 TB drive somehow, or does the appdata share need to be on the same disk? I only want to do this for performance reasons and to avoid putting unneeded wear and tear on the array.
  20. The extended SMART test completed with a read failure on the new drive...
  21. OK, so I got a cable for the other SATA-6/SAS connector on my add-in card and hooked the replacement HDD up to it. I tried running ddrescue and got the attached output. I am currently running an extended self-test on the disk; it was shipped to me in a bag, not a box, with no packing, ugh.
  22. Is there a way to image the new drive from the old one and replace it?
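Imaging the old drive onto its replacement is exactly what GNU ddrescue does. A sketch, with /dev/sdX (failing source) and /dev/sdY (replacement) as placeholders -- verify the device names with lsblk before running anything, since ddrescue overwrites the destination:

```shell
# Identify the source and destination drives first
lsblk -o NAME,SIZE,MODEL,SERIAL

# Pass 1: copy everything that reads cleanly, skipping bad areas
# (-f is required when writing to a raw device; rescue.map records
# progress so later passes can resume where this one left off)
ddrescue -f -n /dev/sdX /dev/sdY rescue.map

# Pass 2: go back and retry the skipped bad areas up to 3 times
ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map
```

The two-pass approach grabs all the easy data before hammering on the bad sectors, which matters on a drive that may be getting worse as it runs.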
  23. Oh no! I did an extended SMART test and it says: "Completed: read failure". I wonder what could have caused that? Is this something I can fix? I just bought a drive to double the capacity... ugh. The SMART error log shows no errors. The self-test history shows:
      Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
      # 1  Extended offline  Completed: read failure  90%        38767            18069872
      # 2  Short offline     Completed without error  00%        34871            -
      # 3  Short offline     Completed without error  00%        34806            -
      # 4  Short offline     Aborted by host          80%        31085            -
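For reference, the self-test and history above come from smartctl and can be reproduced like this, with /dev/sdX standing in for the suspect drive:

```shell
# Start an extended (long) offline self-test; the drive runs it
# internally, so this command returns immediately
smartctl -t long /dev/sdX

# Check progress and, once finished, the result plus the LBA of
# the first read failure (the self-test log shown above)
smartctl -l selftest /dev/sdX

# Full attribute dump: Reallocated_Sector_Ct and
# Current_Pending_Sector are the ones to watch on a failing disk
smartctl -A /dev/sdX
```

A read failure at a specific LBA usually means physical media damage rather than anything fixable in software, which is consistent with the rough shipping described in the later posts.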