N385

Members · 29 posts

Everything posted by N385

  1. The reason I asked is that something else is going on and I don't know what. I decided to remove Parity 1 and Disk 1 and put in two replacements. I moved both of the old disks into another Unraid system; they both showed up as unassigned and are both preclearing. Neither of those disks is actually bad, and someday they'll probably wind their way back into the array to replace another disk or two that aren't actually bad either. *I'm thinking of replacing my HBA...
  2. Unraid version 6.12.3. My daily-driver Windows 10 machine is an Unraid VM. I stepped away from the computer, and when I came back I noticed Windows was at the login screen. Not thinking much of it, I logged back in and noticed Plex wasn't working, so I stopped the Plex docker and it wouldn't restart. This is rare, but it happened about a year ago too and was fixed by a reboot of Unraid. I rebooted Unraid, and once it started I noticed a parity disk and a data disk were both in error and disabled.
     Thinking it was just a hiccup, I rebooted the server again and they both came back in error. So I stopped the array, selected 'no device' for my first data disk, and it showed up in Unassigned Devices; I mounted it and found all the files there and the disk otherwise working as normal. After re-adding it and restarting the array, both disks still came back in error.
     A similar situation happened a couple of years ago (not with the same disks) and was remedied by using 'New Config' and rebuilding parity. Should I do that again, or is there an easier fix? My last parity check was two weeks ago, and all my important array disks are backed up, so I'm not too concerned about losing data. Sorry I didn't grab a diagnostic before the first reboot, but here is my current one, along with a screenshot of my system and the SMART reports for both drives.
     (Data disk 1) smart-20230719-0148.zip, diagnostics-20230719-0110.zip, (Parity disk 1) smart-20230719-0151.zip
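     For anyone wanting to double-check a "disabled" disk from the Unraid terminal, the quick look I do is something like this (sdX is just a placeholder for the actual device letter):
       smartctl -H /dev/sdX    # overall PASSED/FAILED health summary
       smartctl -a /dev/sdX    # full SMART attributes, the same data as in the attached reports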
  3. On my second day of forum surfing I found this: https://learn.microsoft.com/en-us/windows/deployment/mbr-to-gpt That fixed my problem; everything booted and is ready to go. I don't know why the Windows install media I downloaded the other day didn't format the drive that way in the first place...
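     For anyone landing here later, the commands from that Microsoft page that did it for me were roughly these, run from an elevated prompt inside Windows (without /disk it targets the system disk):
       mbr2gpt /validate /allowFullOS    # dry run: checks whether the disk can be converted
       mbr2gpt /convert /allowFullOS     # converts the system disk from MBR to GPT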
  4. Before I started this process I cloned my VM's NVMe SSD to a SATA SSD. If I change the VM settings to point at the bound SATA ports, I can actually boot into that, so I know my VM settings work; I just don't know why they aren't working for the NVMe. Any ideas, anyone? Please help, I've got a lot of work piling up. PS: I thought maybe something went wrong when I loaded Windows on the NVMe, so I reloaded Windows on it yet again and it still won't boot.
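     If it helps with troubleshooting, this is roughly how I've been checking the NVMe from the Unraid terminal (the PCI address below is just a placeholder):
       lspci -nn | grep -i nvme    # find the NVMe controller's PCI address and vendor:device IDs
       lspci -nnk -s 04:00.0       # shows whether vfio-pci or the stock nvme driver currently owns it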
  5. I had a Windows 10 VM running on a 1TB PNY NVMe that I'd used for about 3 years, and it started getting really buggy, so I booted into it and loaded a fresh copy of Windows 10. After booting back into Unraid, the VM wouldn't start, so I unbound everything from it other than the NVMe drive and it still wouldn't work; then I created a new VM and it won't run either. The new VM's details are attached, along with a screenshot from VNC: vm.txt
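     For anyone else debugging a VM that won't start, the terminal checks that go with the attached info are roughly these (the VM name in the log path is a placeholder):
       virsh list --all                                  # confirms the VM is defined and shows its state
       ls /var/log/libvirt/qemu/                         # one QEMU log per VM lives here
       tail -n 50 "/var/log/libvirt/qemu/YOUR-VM.log"    # the end of that log often shows why a start failed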
  6. 88 days, that I know of, is my record. It could have gone longer, but I rarely pay attention before shutting it down. Hardware updates, software updates, the rare Plex error, dust cleanouts, and the occasional UPS-busting power outage all take their toll.
  7. I ran SSD trim manually and it definitely was the cause. I wonder if the plugin went bad or what, but it's disabled now and hopefully everything will run well.
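     For anyone curious, a manual trim from the terminal is just something like this (assuming the standard cache mount; the plugin may cover other pools too):
       fstrim -v /mnt/cache    # trims the cache pool and reports how much was discarded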
  8. CA Backup runs monthly on the 14th @ 3 AM. Mover Tuning is there so only files older than 60 days get moved to the array. It looks like I overlooked SSD Trim: it was set to run daily at 12:00 (I have just disabled it to see if that fixes the problem).
  9. I started a movie this morning at around 11 AM and it quit at 12:02 PM, as if on a schedule. I have not rebooted yet today, so not much will work until I do; I did that screen grab at 12:50 PM today. I have 4 user scripts that run once a day. I just downloaded all the logs and viewed them, and they show each ran successfully, one after the other, at 4:40 AM this morning. Other than that, Mover runs once a week on Sundays at 2:00 AM and the parity check runs the first Sunday of every other month at 3:00 AM. I don't recall having anything scheduled for 12:00 PM. User-Script.Logs.zip
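     A way to see everything cron actually has scheduled is roughly this (the cron.d path is where I understand Unraid collects plugin schedules; it may differ by version):
       cat /etc/cron.d/root    # schedules installed by Unraid and its plugins
       crontab -l              # any extra entries in root's own crontab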
  10. This has been going on for over a month. In that time I have switched which cable connects which drive and have even replaced drives in the pool; I've had the machine open at least 5 times. The SATA connectors on the motherboard go to a virtual machine that does backups, video encoding and a lot more, so they are not available. I have purposely started watching a movie close to noon many, many times and it always stops at ~12:02 (when the browser's video buffer runs out). Not at 11:58 or 12:04, but always 12:02 PM. If this were a mechanical issue it wouldn't happen on such an exact schedule, and we run no AC or heat at this time of year, so it's not happening due to temperature either.
      After 12:02 PM Plex will only show this error, the Unraid Docker button will hang, Fix Common Problems will hang, and the 'Reboot' button will not work either. After a hard reset the machine will work until 12:00 PM the next day.
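      Since the cache pool going read-only is my working theory, the terminal checks that go with it look something like this (this assumes the pool is BTRFS, which is what Unraid uses for multi-device cache pools as far as I know):
        btrfs device stats /mnt/cache    # per-device read/write/corruption error counters
        btrfs fi show /mnt/cache         # confirms which SSDs are actually in the pool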
  11. I have a PCIe card installed that has 4 mini-SAS connectors with a mini-SAS-to-4-SATA cable on each. I've swapped connectors and even replaced the SSDs, to no avail.
  12. I use Unraid mainly for Plex and file storage. At noon my cache disks seem to go into read-only mode (from what I've been able to gather), and to get the server working again I have to reset it. Half the time the server will not restart from the 'Reboot' button, so it has to be hard reset. Fix Common Problems will run to 22% and hang. I have a copy of the log: workhorse-syslog-20220915-1914.zip
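      For what it's worth, checking whether the kernel actually flipped the cache read-only (and why) is something like this right after it happens:
        dmesg | grep -iE 'read-only|i/o error|btrfs'    # look for I/O errors or a forced read-only remount around noon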
  13. The Windows 10 PC I'm using now runs on an M.2 NVMe drive that I'm going to put in the server and pair with an AMD graphics card so I won't have to have two PCs on all the time. I have a second M.2 NVMe that will again be paired with the second Nvidia card for NVENC and can do other tasks relatively undisturbed. If I have to do any work that requires the VMs to be shut off, I'll have to be able to do it with the first video card, which will be paired with Plex for transcoding, etc. BTW, I deleted the driver and reinstalled it, and now it's finally working.
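      For anyone else hitting this, a quick way to confirm the driver sees the card again after a reinstall is something like:
        nvidia-smi       # driver version plus every GPU the host can use
        nvidia-smi -L    # lists the GPUs with their UUIDs (handy for docker templates)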
  14. I was using HDMI and replaced it with DisplayPort. That fixed it with the temporary boot setup, but now, back with everything normal, I've got the horizontal blinking cursor. I think I jinxed myself. Where's a gun when you need it? One shot in the GPU, two in the motherboard.
  15. Well, the new boot disk did the exact same thing: right after startup the monitor shut off. So I changed video cards and nothing changed. One of the things I ordered Monday from Amazon arrived with my AMD video card, and that was a DisplayPort cable. I rebooted with DisplayPort and boom, it works, straight into the sign-in at 2160p. Almost 8 days of anguish and it works. 🙂 I'll edit this post and let you know if I have any luck with my regular system. Sorry for all the trouble.
  16. I think I'm going to install an Unraid trial on a spare thumb drive, remove all my hard drives and SSDs, and see if it boots with the driver... I'll let you know if it works.
  17. To make things simpler I deleted my VM, unbound everything from it, and removed the 2nd video card from the PCIe 3 slot, so now I just have the one video card in PCIe 1. I have tried each card in PCIe 1. Both will boot natively into Unraid without the driver, but not with the driver installed: once boot is complete the monitor shuts off. I boot in legacy mode.
  18. I wondered myself if it had to do with the video cards, so I already swapped them earlier this week and it made no difference. I've tried everything I could think of to get it to work: I would change one setting, reboot, and if it didn't work I'd change it back. I've had a bad week trying to get this working. One of the things I tried was changing the ACS override, and my system didn't like that; I thought I was going to have to start over from scratch because it wouldn't boot at all. Another thing I botched was using a capital letter in --runtime=nvidia in Plex, and the docker disappeared. Thank God everything is well backed up and I knew what to do.
      I'm sorry, I don't know what you mean. Did you want me to change something? If so, you're going to have to tell me what to type. I've only been using Unraid since April and just started messing around with a Linux distro as well. I'm a 53-year-old lifetime Windows user, but I'm learning (slowly).
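      For anyone following along, the case-sensitive bits in the Plex docker template are along these lines (the GPU UUID is a placeholder, and the variable names are the ones the usual Nvidia plugin instructions use; your template may differ):
        --runtime=nvidia                            # Extra Parameters field, all lowercase
        NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx    # container variable, value taken from `nvidia-smi -L`
        NVIDIA_DRIVER_CAPABILITIES=all              # container variable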
  19. Here she is... PS: Just so you know, at the moment I haven't tied the hardware/driver to Plex or anything else; I've just installed the driver and GPU Statistics (both a few days ago). workhorse-diagnostics-20211007-1518.zip
  20. I have disable_xconfig=true set. After the reboot and startup sequence, I now get a horizontal blinking cursor instead of the monitor shutting off. One other change I noticed after installing the driver: the last line of my startup is 'Warning: commands will be executed using /bin/sh', and it sits on that for about 60 seconds before boot finishes. If I delete the driver and restart, it doesn't do that anymore.
  21. With the first video card I can log into the GUI (without the driver), but once the driver is installed I can see the login for about 1/4 second before the output disappears and the monitor shuts off. The second video card is completely VFIO-bound to the VM and works flawlessly with HandBrake for video work and other tasks (I Remote Desktop into it). I have a third video card (AMD) on order that I intend to bind to a second VM; I want to use that one daily and get rid of my standalone Win10 system, which will make the Unraid GUI login a requirement whenever I have to shut off the VMs for any reason.
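      In case it matters, the way I can tell which driver owns each card from the terminal is roughly:
        lspci -nnk | grep -E -A 3 -i 'vga|nvidia'    # for each card, the "Kernel driver in use" line shows nvidia vs vfio-pci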
  22. I have two Nvidia GTX 1660 Super Gaming X cards. One is in the first PCIe slot for Unraid/Plex, while the second card is assigned to a VM. My problem is that after installing the Nvidia driver, on startup the video card shuts off (no output), I can't log directly into the server (with keyboard/mouse on the system), and I occasionally get hardware errors through Fix Common Problems. I uploaded two syslogs: the first is from before installing the driver and the second is from after. workhorse-syslog-20211007-0903.zip workhorse-syslog-20211007-0920.zip
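      A quick way to pull just the driver-related lines out of the live syslog (the same file the attachments contain) is something like:
        grep -iE 'nvrm|nvidia' /var/log/syslog | tail -n 30    # recent kernel/driver messages from the Nvidia stack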
  23. This is my setup before the issues started:
      • 2 parity disks and three data disks
      • 2 SSDs in the main cache pool with the docker data, including Plex metadata
      • 2 SSDs in a secondary cache pool that holds all new Plex media until it's 50 days old, after which it gets moved to the data disks
      • 1 NVMe U.2 cache drive for temp storage
      • 1 NVMe M.2 drive for the VM
      • 2 identical-model Nvidia video cards; the first was working with a Win10 VM for HandBrake conversions and the second was linked to Unraid and Plex
      I messed up Unraid royally unlinking everything from my VM and changing the ACS override, and now I cannot get it to boot into the GUI with a monitor in any way (safe mode or otherwise); there are lots of errors in the script shown on screen while it tries to boot. My question: I have a backup of the Unraid boot disk from 10 days ago (when the system was last running normally), and I have copied the zip file to a secondary thumb drive. Can I rewrite the current Unraid-linked thumb drive from that backup to get it to boot?
      The events that led to the problem: This morning I set up Unraid with a monitor connected to the 2nd Unraid video card and couldn't get a video boot screen for Unraid. Knowing that the first video card was probably getting the pre-boot screen, I disconnected it and successfully booted with just the second video card. To head off future issues I wanted to unlink everything, move the Unraid/Plex video card to the first slot, and move the 2nd card to re-pair with the VM. I wanted to do this so I could boot into the GUI with the first video card if needed, and so I could start using the VM with its hardware and get rid of my old standalone Windows 10 system. So I unlinked Unraid and Plex from the first video card and unlinked the second video card from the VM (leaving only the M.2 linked to it), and all was working well except that Unraid wouldn't connect to the internet. I worked all day trying to get it to work and tried a lot of things from searching the Unraid forum, to no avail. One of the later things I did was change the ACS override, and after that I can't get anything to work. I can no longer get into Unraid (even using GUI safe mode); it just gives a lot of errors and a Firefox screen saying it can't connect. No GUI and no network.
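      In case it helps the next person, my understanding of the restore procedure, done from another Linux box, is roughly this (device name and zip name are placeholders, and the make_bootable script is the one that ships on the Unraid flash and inside the backup):
        mkfs.vfat -F 32 -n UNRAID /dev/sdX1          # re-format the stick FAT32 with the UNRAID label
        mkdir -p /mnt/usb && mount /dev/sdX1 /mnt/usb
        unzip flash-backup.zip -d /mnt/usb           # extract the 10-day-old flash backup onto it
        cd /mnt/usb && bash make_bootable_linux      # make the stick bootable again
      Since it's the same physical stick, the license key should still match its GUID.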