MxFox

Members
  • Posts: 19
  • Joined

  • Last visited

About MxFox

  • Birthday 12/16/1982

  • Gender
    Male

MxFox's Achievements

Noob (1/14)

Reputation: 0

  1. So, I've gone ahead and updated my BIOS and disabled C-States, but unfortunately I'm still not having any luck locating anything related to Resizable BAR. After rebooting back into Unraid, my graphics card still isn't showing up. I stumbled upon an error log message stating, "NVRM: GPU 0000:0a:00.0: RmInitAdapter failed!" I did some digging online but couldn't find much, except for a couple of folks mentioning issues with the newer Nvidia driver versions. Feeling a bit stuck, I decided to take a step back and downgrade to version 545.29.05 of the driver, and lo and behold, my card is back in action. But about 10 minutes later it drops off again. Anything else I can do? I came across this article... Diagnostics.zip
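Since the card only behaves on 545.29.05, it can help to script a check of which driver branch a given boot ended up on before chasing BIOS settings again. A minimal sketch using a version-aware sort; the example version string is a placeholder, and on a live box it would be extracted from /proc/driver/nvidia/version:

```shell
#!/bin/sh
# Sketch: compare the running NVIDIA driver version against the
# known-good 545.29.05 using version-aware sort (GNU sort -V).
# On a live system `ver` would come from something like:
#   ver=$(grep -o '[0-9][0-9]*\.[0-9.]*' /proc/driver/nvidia/version | head -n1)
known_good="545.29.05"
ver="550.54.14"   # placeholder value for illustration

newest=$(printf '%s\n%s\n' "$ver" "$known_good" | sort -V | tail -n1)
if [ "$ver" = "$known_good" ]; then
    echo "running the known-good driver $known_good"
elif [ "$newest" = "$ver" ]; then
    echo "driver $ver is newer than known-good $known_good"
else
    echo "driver $ver is older than known-good $known_good"
fi
```

Logging this at boot makes it obvious when a plugin update has silently moved the server back onto a newer driver branch.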
  2. Thanks for getting back to me. I've enabled 4G Decoding, but it turns out my BIOS doesn't support Resizable BAR. I apologize for not sharing the logs initially; I figured it would be simpler to extract what I thought was relevant for you all. I have now uploaded the new logs. Just to clarify, nothing has changed on my server recently, so having to tweak BIOS settings to get it working doesn't quite add up for me, considering it's been running smoothly like this for years. Perhaps someone else has encountered this issue before. After enabling 4G Decoding, I'm not getting any display on my monitor: Unraid still boots up fine, but I can't see anything on the screen. Please also see the slot I have the GPU plugged into. nvidia-bug-report.log.gz
  3. Hello, I'm looking for some assistance with an ongoing issue that's been giving me a bit of trouble. My GPU, a 1060, seems to have disappeared from view. It's been chugging along fine for quite some time, particularly serving its purpose for transcoding on Plex. Initially, I suspected a GPU hardware fault, possibly indicating the need for a replacement. However, I tested by booting into my gaming PC on the same rig (dual boot) and played a solid three-hour Battlefield session without a hitch, which suggests everything is shipshape on the hardware front. In an effort to troubleshoot, I've updated Unraid (now only a few versions behind the latest) and also made sure I'm running the latest release branch of the Nvidia driver to cover all bases. Please see the logs:
     [   48.087597] [drm] [nvidia-drm] [GPU ID 0x00000a00] Loading driver
     [   48.088477] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:0a:00.0 on minor 0
     [  137.155842] nvidia-uvm: Loaded the UVM driver, major device number 239.
     [  137.685108] NVRM: GPU at PCI:0000:0a:00: GPU-24fbbf6a-793a-da81-f287-80f2835cfcc5
     [  137.685126] NVRM: Xid (PCI:0000:0a:00): 79, pid='<unknown>', name=<unknown>, GPU has fallen off the bus.
     [  137.685139] NVRM: GPU 0000:0a:00.0: GPU has fallen off the bus.
     [  137.714976] NVRM: GPU 0000:0a:00.0: request_irq() failed (-22)
     [  137.715000] NVRM: GPU 0000:0a:00.0: request_irq() failed (-22)
     [  152.103589] NVRM: GPU 0000:0a:00.0: RmInitAdapter failed! (0x22:0x56:762)
     [  152.103641] NVRM: GPU 0000:0a:00.0: rm_init_adapter failed, device minor number 0
     [  152.111316] NVRM: GPU 0000:0a:00.0: RmInitAdapter failed! (0x22:0x56:762)
     [  152.111361] NVRM: GPU 0000:0a:00.0: rm_init_adapter failed, device minor number 0
     [  153.059424] NVRM: GPU 0000:0a:00.0: RmInitAdapter failed! (0x22:0x56:762)
     [  153.059469] NVRM: GPU 0000:0a:00.0: rm_init_adapter failed, device minor number 0
     [  153.063811] NVRM: GPU 0000:0a:00.0: RmInitAdapter failed! (0x22:0x56:762)
     [  153.063842] NVRM: GPU 0000:0a:00.0: rm_init_adapter failed, device minor number 0
     *** /proc/driver/nvidia/./gpus/0000:0a:00.0/information ***
     ls: -r--r--r-- 1 root root 0 2024-04-01 15:16:20.086794978 +1000 /proc/driver/nvidia/./gpus/0000:0a:00.0/information
     Model:          NVIDIA GeForce GTX 1060 6GB
     IRQ:            114
     GPU UUID:       GPU-24fbbf6a-793a-da81-f287-80f2835cfcc5
     Video BIOS:     ??.??.??.??.??
     Bus Type:       PCIe
     DMA Size:       47 bits
     DMA Mask:       0x7fffffffffff
     Bus Location:   0000:0a:00.0
     Device Minor:   0
     GPU Excluded:   No
     *** /proc/driver/nvidia/./gpus/0000:0a:00.0/unbindLock does not exist
     Any suggestions would be great.
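The key line in that dump is the Xid event: Xid 79 ("GPU has fallen off the bus") generally points at the PCIe link, power delivery, or the slot/riser rather than the driver itself. A small filter makes such events easy to spot in a long kernel log; a minimal sketch, using a couple of the lines above as sample input (on a live system you would pipe `dmesg` in instead):

```shell
#!/bin/sh
# Sketch: pull NVRM Xid events out of a kernel log so the error code
# stands out. The sample below stands in for real `dmesg` output.
log="NVRM: Xid (PCI:0000:0a:00): 79, pid='<unknown>', name=<unknown>, GPU has fallen off the bus.
NVRM: GPU 0000:0a:00.0: RmInitAdapter failed! (0x22:0x56:762)"

# Extract "Xid (<bus address>): <code>" from any matching lines.
printf '%s\n' "$log" | grep -o 'Xid ([^)]*): [0-9]*'
# prints: Xid (PCI:0000:0a:00): 79
```

The extracted code can then be looked up in NVIDIA's Xid documentation rather than scrolling through repeated RmInitAdapter noise.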
  4. Hi guys, I'm also having issues renewing certs. Any ideas?
     <------------------------------------------------->
     <------------------------------------------------->
     cronjob running on Sun Mar 31 02:08:00 AEST 2024
     Running certbot renew
     Saving debug log to /var/log/letsencrypt/letsencrypt.log
     - - - - - - - - - - - - - - - - - - - - - - - - - - - -
     Processing /etc/letsencrypt/renewal/..org.conf
     - - - - - - - - - - - - - - - - - - - - - - - - - - - -
     Hook 'pre-hook' reported error code 111
     Hook 'pre-hook' ran with error output:
     s6-svc: fatal: unable to control /var/run/s6/services/nginx: No such file or directory
     Renewing an existing certificate for ..org and 2 more domains
     Certbot failed to authenticate some domains (authenticator: standalone). The Certificate Authority reported these problems:
     Domain:
     Type: connection
     Detail: : Fetching http://..org/.well-known/acme-challenge/0h-0uQ00FcRsGmHcAMCVe94XaXZ50uQukjriA8qpPNo: Timeout during connect (likely firewall problem)
     Domain: ..org
     Type: connection
     Detail: : Fetching http://..org/.well-known/acme-challenge/FSkXlkFClj1ROJ95T_ZpVt1kOzMnXDgZcYk0fNia3Q0: Timeout during connect (likely firewall problem)
     Hint: The Certificate Authority failed to download the challenge files from the temporary standalone webserver started by Certbot on port 80. Ensure that the listed domains point to this machine and that it can accept inbound connections from the internet.
     Failed to renew certificate ..org with error: Some challenges have failed.
     - - - - - - - - - - - - - - - - - - - - - - - - - - - -
     All renewals failed. The following certificates could not be renewed:
     /etc/letsencrypt/live/..org/fullchain.pem (failure)
     - - - - - - - - - - - - - - - - - - - - - - - - - - - -
     1 renew failure(s), 0 parse failure(s)
     Ask for help or search for solutions at https://community.letsencrypt.org.
     See the logfile /var/log/letsencrypt/letsencrypt.log or re-run Certbot with -v for more details.
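Two separate things are failing in that output: the pre-hook dies because the s6 service directory for nginx does not exist (error 111), and the CA cannot reach port 80 from outside (the timeout). The second is a firewall/port-forwarding question, but the first can be guarded in the hook itself. A minimal sketch of a defensive pre-hook, assuming the s6 path shown in the log (adjust it for your container image):

```shell
#!/bin/sh
# Sketch of a defensive certbot pre-hook: only call s6-svc when the
# service directory actually exists, instead of dying with code 111.
# The path below is taken from the log above; it is an assumption
# that it matches your image's layout.
svc_dir="/var/run/s6/services/nginx"

if [ -d "$svc_dir" ]; then
    # -d brings the supervised service down so certbot's standalone
    # webserver can bind port 80 for the ACME challenge.
    s6-svc -d "$svc_dir"
else
    echo "skipping s6-svc: $svc_dir not present" >&2
fi
```

Even with the hook fixed, the renewal will keep failing until port 80 is forwarded to the machine running certbot, as the "Timeout during connect" detail indicates.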
  5. I found a way to get Nextcloud back up and running. First, I moved Unraid off port 443. Once that was done, I restarted Nextcloud, and to my relief it sprang back to life. This is only a temporary fix to restore functionality, though. Over the weekend I'm going to explore more options and experiment with different fixes; if I manage to find another solution that works, I'll definitely share it with all of you.
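For anyone hitting the same clash, the quickest way to confirm what is actually holding port 443 is to list the listeners on that port. A minimal sketch; the sample variable stands in for real `ss -tlnp` output on a live box (the process name and PID shown are placeholders):

```shell
#!/bin/sh
# Sketch: spot a port clash by filtering listeners on 443.
# Live equivalent:  ss -tlnp | awk '$4 ~ /:443$/'
# The sample below imitates ss output; values are illustrative.
sample='LISTEN 0 4096 0.0.0.0:443 0.0.0.0:* users:(("nginx",pid=1234,fd=6))
LISTEN 0 4096 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=1234,fd=7))'

# Column 4 is the local address:port; column 6 names the owning process.
printf '%s\n' "$sample" | awk '$4 ~ /:443$/ { print $4, $6 }'
# prints: 0.0.0.0:443 users:(("nginx",pid=1234,fd=6))
```

If the Unraid web UI shows up as the owner, moving it to another port in Settings frees 443 for the Nextcloud reverse proxy, which matches the workaround described above.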
  6. Damn, I was hoping not. Thank you for your replies.
  7. Sorry, I misunderstood. Yes, I do have notifications set up, I do get emails when there are issues, and I'm normally on top of them 100%. With regards to this issue, though, I did not have any problems until the parity check last night: there were no warnings, I was not notified of any issues, and everything was green and healthy before I went to bed. But the next morning I saw this. What I would like to understand is whether this issue was caused by the parity check starting before the rebuild of my new data drive had finished, causing a conflict somehow, or whether my new disk is just faulty. And why all of a sudden? Thanks.
  8. Thanks for the reply. Yeah, just under a month old. No, I don't have any warnings, only errors, and only since the parity check was run.
  9. Thanks for the reply. I ran the parity check again with the correcting option enabled, and it did come back with errors (logs attached). Should I re-run it with the correcting option unticked?
  10. Hi guys, wondering if someone is able to assist me. I recently upgraded my parity drive from 2TB to 4TB with no issues: I pre-cleared the drive, swapped it over, and all was good for about two weeks. I have since replaced a 2TB data drive with a 4TB drive, did the pre-clear with no issues, replaced it following the correct procedure, and it was all getting rebuilt nicely yesterday. I have now woken up to parity check errors (I have my parity check set to run every Sunday at 12pm). I don't know whether the rebuild of the drive had completed and this has caused a conflict of some sort, so I rebooted the server and ran another parity check, but unfortunately I'm still getting errors. I have uploaded the logs; if anyone is able to point me in the right direction I would really appreciate it. Normally I just swap the drives out when I get these errors, as the 2TB data drives are fairly old, but these are new drives, not even a month old. I found this in the syslog:
      Dec 30 17:31:56 Tower kernel: md: disk0 read error, sector=7764197376
      Dec 30 17:31:56 Tower kernel: md: disk0 read error, sector=7764197384
      Dec 30 17:31:56 Tower kernel: md: disk0 read error, sector=7764197392
      Dec 30 17:31:56 Tower kernel: md: disk0 read error, sector=7764197400
      Dec 30 17:31:56 Tower kernel: md: disk0 read error, sector=7764197408
      Dec 30 17:31:56 Tower kernel: md: disk0 read error, sector=7764197416
      Thanks in advance. tower-diagnostics-20181230-1836.zip
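Those md lines all name disk0, which in Unraid is the parity drive itself, so the read errors are on the newly upgraded parity disk rather than the rebuilt data disk. A small filter summarises them at a glance; a minimal sketch using a few of the syslog lines above as sample input (on the server you would grep the real syslog instead):

```shell
#!/bin/sh
# Sketch: count md read errors per run and note the last failing
# sector. The sample stands in for `grep 'md:.*read error' /var/log/syslog`.
log='Dec 30 17:31:56 Tower kernel: md: disk0 read error, sector=7764197376
Dec 30 17:31:56 Tower kernel: md: disk0 read error, sector=7764197384
Dec 30 17:31:56 Tower kernel: md: disk0 read error, sector=7764197392'

printf '%s\n' "$log" | awk -F'sector=' '/read error/ { n++; last=$2 }
    END { printf "%d read errors, last sector %s\n", n, last }'
# prints: 3 read errors, last sector 7764197392
```

From there, a SMART report on the parity device (e.g. `smartctl -a`) would show whether these are pending/reallocated sectors on the new drive or cabling-related CRC errors.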
  11. MxFox

    NVram

    Hi guys, not sure if this has been asked before: can I run Unraid from NVRAM instead of a USB stick? Thanks in advance.
  12. Hi guys, thanks for everyone's advice, input, and general willingness to help. I have been able to test a bit further this evening and have found that DISK1 is completely dead: if I swap DISK1 for a new disk, the array picks it up instantly. So am I safe to assume I have lost all my data on DISK1? Is there nothing I can do to make Unraid rebuild onto the new disk? Are there no hacks where I can remove DISK5 in the config file so it looks like only one disk failed, and hopefully get Unraid to rebuild DISK1? Thanks in advance.
  13. Hi johnnie.black and bjp999, thanks for your replies. @johnnie.black: it's a hardware array, so I would assume that if it were cabling I would lose all 4 disks, as it's one cable into the motherboard, but I will double-check everything is nice and tight. @bjp999: thanks. As for checking the rest of the disks, they are all very full (about 100GB free on each one), hence me trying to add another disk to the array, and the disks currently there are also very old. So, assuming my DISK1 has failed and I'm not getting it back: is there a way to remove DISK5, since it is new and I did not have much data on it? Could I make Unraid think there was only one disk failure, and then add the new drive as DISK1 so it can be rebuilt? Other than that, what would I need to do to bring this array online again? Thanks.
  14. Thank you for the reply. I only lost one drive, but unfortunately the new drive I added a week ago had issues, so I went to swap it out, and while pre-clearing the replacement I lost an old drive as well. I think it's just bad timing. I have checked all my cabling and the controller, and all seems in order. Thanks for the advice.