MacModMachine

Members

  • Posts: 23
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


MacModMachine's Achievements

Noob (1/14)

Reputation: 0

Community Answers (1)

  1. Literally just fought this battle; a firmware update did the trick for me. Also forcing write cache on using: smartctl -s wcache-sct,on,p /dev/sdX (see the write-cache sketch after this list).
  2. Hi Tom, I have been using unRAID for 13 years now and I love it. However, this issue did come up for me when using 6.** specifically. I opened a ticket with support and it was closed as a hardware issue, which I accepted. So after getting that news I installed TrueNAS (some years ago); it still runs today on the same hardware without issue.

     Money not being much of an issue for me, as I use unRAID for my photography business, I invested another 5k in two systems, all brand-new hardware including SAS/SATA controllers and cables. This error hit me after about 3 months of use, disabling 2 of my disks overnight. The second system lasted about 5 months before hitting 1 failed disk overnight. I replaced the disks on both systems; this time the failures came in months 4 and 6, with different disks plugged into different cables. I then installed TrueNAS on both systems, and they have been running for a year and a bit.

     I built a new unRAID system at that time with new hardware. The problem cropped up around month 8 this time, using NAS-rated WD Reds. I'm at a loss at this point and very torn. I love unRAID, but I cannot trust either the hardware in general or unRAID... I'm just not sure which one it is right now.
  3. I used this with success on my P4s: https://www.lxg2016.com/55875.html
  4. You can use vGPU; it's a complicated process and very hacky. I have 2 P4s split up myself.
  5. I think I found a possibly major defect. My diagnostics are from the beta; however, this also happens in 6.8.3.

     Let's start off with the issue: after roughly 7-8 days, the disks will fall out of the array. (I know you will say it's the cables, the cards, the HDDs, etc.; however, it's not.) This can usually be fixed by removing and re-adding the disk to the array (letting it rebuild); however, when I do that, it shows the disk as empty now... all data is gone. I have tried this on several different combinations of motherboard/CPU and controllers, all AMD systems. Pulling out the motherboard and CPU and swapping in a 7th-gen Intel CPU works fine; I left it running for a few weeks with no issues. I have made another unRAID flash drive and started from scratch; the issue still shows up. I tried this hardware in many combinations:

     • B450F motherboard, A320I-K motherboard
     • AM4 3400G, AM4 3600
     • Hitachi 3TB disks, brand-new 8TB WD disks, brand-new 8TB Seagate Compute disks
     • Timetec memory and G.Skill memory in various sizes, even new packages
     • LSI 2008-based controller, PERC H310 flashed to IT mode, HP H200 flashed to IT mode, motherboard SATA connectors
     • brand-new 800W Corsair power supply

     The logs look like the disk is failing; however, if I swap the disk out with a brand-new one, another will show failed, until this happens to all the disks. Then the brand-new ones start doing the same. I'm completely lost at this point... I have had to revert to my Intel machine running my backup array. All the parts listed above except the 3TB disks and the controllers are brand new, and those parts have no issues when swapped into the Intel system. All of the HBAs are cooled; nothing in the system builds runs hot, everything is within a 40-50C max operating temp, and the HDDs run at a 35C average. I have tried different PCIe slots, controller combinations, and reflashing the controllers; they work in the Intel system. I have been running unRAID for 10+ years and this is the first time I have had an issue. fileserver-diagnostics-20200623-1206.zip
  6. Crap... I think you're right... I must have f'd up somewhere. The disk that failed was no doubt on that controller. I'll have to hate myself for the rest of the day at minimum. Thanks... seriously, thanks. I'll remove that controller and burn it with a torch.
  7. That was added after; the disks in question are not on it. This problem started before that SASLP was added. I can take it out, but it will make no difference with this problem.
  8. This is what I can see possibly being the issue; however, not much information is given (a sketch for pulling these entries out of the syslog follows after this list):

     May 26 17:01:19 fileserver kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
     May 26 17:01:19 fileserver kernel: sd 10:0:1:0: [sdd] tag#504 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
     May 26 17:01:19 fileserver kernel: sd 10:0:1:0: [sdd] tag#504 CDB: opcode=0x8a 8a 00 00 00 00 00 74 81 02 60 00 00 00 08 00 00
     May 26 17:01:19 fileserver kernel: print_req_error: I/O error, dev sdd, sector 1954611808
  9. Had another dropout; grabbed the logs this time. fileserver-diagnostics-20200526-2119.zip
  10. I forced the PCIe down to Gen 2 from Auto to see if that could possibly be the issue, since I have an H310 in this now. I have tried several brand-new 9211s, though.
  11. That's my bad, I forgot to grab the diagnostics before the reboot; I'll wait for it to show again and grab it then. By "booted" I mean they show a red X; however, the disk remains fine and must be removed and re-added to the array. They are showing "cannot write sector" in the logs, but the disk is fine; I scanned it several times with SpinRite.
  12. Here is a doozy: I've been using unRAID for 11 years now and never had to ask a question... well, today is that day. Unraid 6.8.3, 9 disks in the array, Ryzen 3400G + 64GB RAM + LSI 2008 controller. Disks are getting booted from the array overnight. So far I have:

     • replaced the controller
     • replaced the cabling and power supply (data + power)
     • replaced the RAM
     • replaced the motherboard + CPU (Ryzen 3600 now)
     • replaced every disk with brand-new sealed disks
     • run a memory test for 7 days with no issues
     • run FreeNAS on the same system with ZFS pools and had no issues for 14 days

     Still booting disks... in unRAID. Grateful for any help, as I'm going slightly crazy. Posting diag: fileserver-diagnostics-20200524-0740.zip
  13. How can I easily get my drives converted to Btrfs? One at a time? Is it worth doing for the scrubbing? Thanks! (A rough sketch of the one-at-a-time shuffle follows after this list.)
  14. After some serious testing, here are my findings: I used 2 new thumb drives, still the error. I tested my 4GB stick and it had memory errors; I used another 4GB stick and it booted. No matter what, safe mode or not, regular unRAID or Xen unRAID, it will not boot with 1GB of RAM. I also used another Core 2 system and tested it with 1GB of RAM with the same error, but it booted fine with 2GB. Problem solved: unRAID needs more than 1GB of RAM with beta 6 and on.
  15. "I see you're using 1GB of RAM. If you're booting with the Xen modules then you might be running out of available memory. From the 'splash screen' you might try booting in safe mode without Xen." I just tried this and it made no difference; I even put in a 4GB stick just to be sure. Thanks.
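
Regarding the write-cache command in answer 1: a minimal sketch of checking and persistently enabling a drive's write cache with smartctl, assuming the drive supports SCT Feature Control; /dev/sdX is a placeholder for the actual device. This only illustrates the command mentioned there, not a drive-specific recommendation.

    # Show the current write-cache setting (requires smartmontools)
    smartctl -g wcache /dev/sdX

    # Enable the write cache via SCT Feature Control; the trailing ",p" asks
    # the drive to keep the setting persistent across power cycles
    smartctl -s wcache-sct,on,p /dev/sdX

    # Confirm the change took effect
    smartctl -g wcache /dev/sdX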
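
Regarding the log excerpt in answer 8: a rough sketch of pulling the same kind of entries out of a saved syslog and then asking the flagged drive about its own health; the /var/log/syslog path and the sdd device name are assumptions carried over from that excerpt.

    # Pull the SAS error-handler and I/O error lines for the suspect device
    grep -E "sas_scsi_recover_host|I/O error" /var/log/syslog | grep sdd

    # Ask the drive itself: SMART health summary and the drive's own error log.
    # If these come back clean, the link, cable, or HBA is the more likely culprit.
    smartctl -H /dev/sdd
    smartctl -l error /dev/sdd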
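
Regarding the question in answer 13: data disks are generally converted one at a time by moving the data off, changing that disk's filesystem and formatting it (done from the unRAID web UI), then moving the data back. Below is a rough sketch of the data-shuffle portion only, assuming disk1 is being converted and disk2 has enough free space; the paths are the stock unRAID per-disk mount points.

    # Copy everything off the disk being converted onto another data disk,
    # preserving attributes; verify the copy before destroying anything
    rsync -avh --progress /mnt/disk1/ /mnt/disk2/disk1-backup/

    # (Stop the array, set disk1's filesystem to btrfs in the GUI, start the
    #  array, and format disk1 -- then copy the data back)
    rsync -avh --progress /mnt/disk2/disk1-backup/ /mnt/disk1/

    # Once everything checks out, remove the temporary copy
    rm -r /mnt/disk2/disk1-backup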