Everything posted by reftek

  1. Hi! I am facing a similar issue: complete freezes on my previously stable system since I added my Quadro P1000. I was thinking my hardware was failing, but then I discovered the freeze almost always occurs while transcoding. I am using Jellyfin when it freezes; sometimes the GUI lags badly and barely functions, but about 75% of the time the system is completely frozen and I have to hard reset via my server management interface or the hardware button. Seems P1000-related to me.
  2. For the sake of anyone who has the same issue and stumbles on this post: the last step I took was to disable the Recycle Bin plugin's schedule, and I have not had a single system lockup since.
  3. Does anyone know if this issue affects the LSI 9300-8i and hard drives? I'm building a new server and I ordered this card without looking into this thread first...
  4. There was indeed a problem with 8TB/10TB Seagate IronWolf and IronWolf Pro drives some time ago: the disks were randomly marked as failed, and a drive could work for 2 hours or 6 days before being flagged. It was a firmware setting issue that can be fixed manually; here is the link to the thread with the procedure to fix your drives (the gist of the fix is sketched below). I have a Fujitsu D2607 that is pretty stable, but those cards can get really hot in a desktop computer case. They are designed with a heatsink meant for server chassis, where the directional airflow keeps the card cool. If you think heat is the issue, you can always zip-tie a small CPU/case fan to the card to cool it actively. I hope you can fix your issue.
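     For reference, that thread's manual fix used Seagate's SeaChest utilities to disable the EPC power feature and low-current spinup on each drive. This is a rough sketch from memory, so verify the exact tool names, flags, and device paths against the linked thread before running anything:

         # Identify the drive first (replace /dev/sg2 with your drive's SG device)
         SeaChest_Info -d /dev/sg2 -i

         # Disable the Extended Power Conditions (EPC) feature
         SeaChest_PowerControl -d /dev/sg2 --EPCfeature disable

         # Disable low-current spinup
         SeaChest_Configure -d /dev/sg2 --lowCurrentSpinup disable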
  5. Hi, the system locks up completely at random since I updated to 6.11.5 (no GUI, no ping, no Docker access, no share access). My setup was working fine on all previous versions and nothing changed. The system is completely frozen each time and I have to hard reset the server. It boots up fine, starts a full disk inspection because of the dirty shutdown, and then randomly freezes again after a while, sometimes a couple of hours after the rebuild completes, sometimes almost a week later. After the last freeze, I configured an external syslog server to catch logs (a minimal receiver config is sketched below), and this was the last thing it logged before the freeze, this time after almost 7 days of uptime:

     2022-12-18 05:00:08 User.Notice 192.168.2.10 Dec 18 05:00:01 Tower Recycle Bin: Scheduled: Files older than 14 days have been removed

     Please help. Edit: I just disabled the Recycle Bin plugin schedule since that is the last entry, but maybe something else in the syslog could help pinpoint the issue. SyslogCatchAll-2022-12-18.txt
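     For anyone wanting to do the same, here is a minimal sketch of the receiving side, assuming a separate Linux box running rsyslog as the remote target (Unraid's syslog settings then point at that box's IP). The path and port are just common defaults, not anything Unraid-specific:

         # /etc/rsyslog.d/10-remote.conf on the receiving machine
         $ModLoad imudp
         $UDPServerRun 514

         # Write each sending host's messages to its own file
         $template RemoteLogs,"/var/log/remote/%HOSTNAME%.log"
         *.* ?RemoteLogs

     Restart rsyslog afterwards and the Unraid box's messages should start landing under /var/log/remote/.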
  6. So there was an issue in the PIA network.
  7. I saw that it is a known issue for pia-foss on GitHub, and that it is corrected in the master branch. It should work if the script pulls the token from https://WWW.privateinternetaccess.com/gtoken/generateToken instead of https://privateinternetaccess.com/gtoken/generateToken. Their connection script was updated; I don't know how fast binhex can integrate this change into the image (a sketch of the fix is below). https://github.com/pia-foss/manual-connections/issues/137
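     The change itself is tiny. Here is a simplified sketch of what the token step does after the fix; the variable names are my own placeholders, not the script's exact code, so check the linked issue for the real diff:

         #!/bin/bash
         # Fetch a PIA auth token. The fix was adding the www. prefix
         # to the host in the generateToken URL.
         PIA_USER="p1234567"   # placeholder PIA username
         PIA_PASS="xxxxxxxx"   # placeholder PIA password

         token_response=$(curl -s -u "$PIA_USER:$PIA_PASS" \
           "https://www.privateinternetaccess.com/gtoken/generateToken")
         echo "$token_response"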
  8. I'm no expert on rebuild times, but both your parity drives are 5400 RPM, and that impacts rebuild speed. Have you precleared the new drive? What speeds did you get while preclearing it? 3 or 4 days for a 6TB drive seems quite long.

     parity:
     Model Family:  WDC HGST Ultrastar He10
     Device Model:  WDC WD80EMAZ-00WJTA0
     User Capacity: 8,001,563,222,016 bytes [8.00 TB]
     Rotation Rate: 5400 rpm

     Model Family:  WDC HGST Ultrastar He10
     Device Model:  WDC WD80EMAZ-00WJTA0
     User Capacity: 8,001,563,222,016 bytes [8.00 TB]
     Rotation Rate: 5400 rpm
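     If you want to sanity-check a drive yourself, the rotation rate and a rough sequential read speed are easy to pull on any Linux box (replace /dev/sdX with the actual device):

         # Confirm the rotation rate the drive reports
         smartctl -i /dev/sdX | grep -i 'rotation rate'

         # Quick, rough sequential read benchmark
         hdparm -t /dev/sdX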
  9. Hi, I thought of that first, but then I would lose access to the data on the 4 drives for a while, while each disk rebuilds one at a time. Too many missing disks to start the array?
  10. I put the RAID card back in, then New Config, started the array, and all my drives are mountable; it seems like my data is all there. THANK YOU! Now that I'm in a safe state, I just swapped out an empty 1TB drive hooked to the motherboard and installed a 6TB drive instead. I'll move all the data from the drives on the Adaptec RAID card to that 6TB drive on the motherboard (something like the copy sketched below). When all the HDDs from the RAID card are empty, I will install my Fujitsu D2607-A21, which is in IT mode, then create a new configuration and format those 4 drives to work in HBA mode. That sounds like a good plan. Thanks!
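      For the data move, this is the kind of disk-to-disk copy I mean; the disk numbers are hypothetical, and the trailing slashes matter to rsync:

          # Copy everything from disk2 (behind the RAID card) to disk5 (the new 6TB),
          # preserving permissions and timestamps
          rsync -avh --progress /mnt/disk2/ /mnt/disk5/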
  11. The RAID card was an Adaptec ASR-5405Z, and it has no IT mode. The drives were assigned to JBOD mode; it may have created logical disks instead of passing them through (an easy way to check is sketched below).
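      One way to tell whether a controller passes drives through raw or wraps them in logical volumes is to look at what the OS sees; for example, assuming /dev/sdb is one of the drives behind the card:

          # A passed-through drive reports its real vendor, model, and serial;
          # a controller-created volume typically shows the Adaptec logical device instead
          smartctl -i /dev/sdb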
  12. Hi guys, I swapped a RAID controller that was in JBOD mode for an HBA card, then created a new configuration with all the drives in the right slots as described here, but I misclicked the "Parity is valid" checkbox and started the array without it checked. After the switch it told me 4 of my drives were in an unsupported partition layout. When I scrolled the page I saw that a parity check was running, and I immediately stopped the array in a panic. I read that I needed to unassign a single drive, start the array to emulate that drive, stop the array, reassign the drive, and let the rebuild go one disk at a time, so I tried it: I unassigned a drive, started the array, and checked the "parity is valid" box. The disk was not emulated, and I then realized the partial parity rebuild had probably corrupted the parity disk. So now I may have an invalid parity drive and 4 data drives that are unmountable, and I feel like I'm screwed. Do I have a way to save my data? Should I put my old RAID card back in the server and try to rebuild parity first? My layout:
      - 4 drives were already hooked to the motherboard, parity being one of them.
      - 4 drives were on the JBOD RAID card, plus 1 PCIe SSD for cache.
      I'm panicking. I think I may need to do a new config with the 4 good drives (parity + 3 HDDs):
      - New config with the 4 detected drives.
      - Start the array.
      - Let parity rebuild.
      - Install the unmountable-drives plugin.
      - Mount the drives.
      - Move the data from one disk to the array.
      - Add that disk to the array and format it.
      - Let parity rebuild.
      - Repeat for the next 3 drives.
      Is that a good plan? It will be a long plan, but I want my data back. (A quick way to inspect the partition layout is sketched below.)
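      For anyone diagnosing the same "unsupported partition layout" message: you can compare the partition table of an affected disk against a known-good array disk. The device name here is a placeholder:

          # Print the partition table; a controller-written layout (extra partitions,
          # unusual start sector) stands out next to a normal single-partition array disk
          fdisk -l /dev/sdb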