phbigred

Everything posted by phbigred

  1. Any recurring notifications in your logs that stand out? I edited late; mind posting what your MB model is and whether you are running any overclocks? It might be a problem the community has already seen.
  2. Turn on Tips and Tweaks troubleshooting mode, and post diagnostics the next time it dies. Are you by chance running Plex as a docker with transcoding in memory (/tmp)? Mine was crashing when I had that set, so I moved it to a cache location and have been stable since (a transcode-mapping sketch is included after this list). Also post what plugins and dockers you are running, plus your MB model, as there may be hints for others who have run into your problem.
  3. I stand corrected; this is required for initial setup. So my thought still holds for the initial configuration: dual PCIe, then remove.
  4. Remember Ryzen doesn't have on-chip video, so you'll need a video card for unRAID to boot. You won't need an X370; a B350 with dual PCIe slots and a cheap video card in the first PCIe slot is, I believe, all that's required to do what you want.
  5. Yup, again it's related to version 6 and its known issues. If it's too much work, revert back to 5, but understand that the 5.x branch is no longer actively supported.
  6. Also make sure Plex is purging deleted files. I've seen libraries get very large, up to 100GB, and I've reindexed mine from scratch before. Make sure the purge-deleted-items setting is enabled in Plex's settings. Otherwise, if that's not an option, move it to a single disk on the array.
  7. Set the appdata share's cache setting to Prefer to start out. If it overruns, it'll spill onto the array until you get your appdata under control. You must have some apps dumping to it and eating the space. Also check your docker image file space on the Dashboard page. Is it full?
  8. Nope, the backup is your database with your metadata and viewing/added history. Your best bet would be to go into your folder and copy it. It may be relatively large (maybe 100GB) with that much media, so copy it to a basic share on your array temporarily (a copy-command sketch is included after this list). I recommend using the docker by LinuxServer.io or the one built by Plex; both are fairly straightforward. Once you set one up, there will be a location in your appdata folder for Plex that you move your database to before starting it. It should remember everything; otherwise you could always rebuild the database. It depends on whether you want to keep the history and indexing.
  9. Yup, or change it under advanced settings in VM Manager on the Settings tab. Either will work; then reboot.
  10. Yeah, do what Jonathan mentioned. I recommend a new drive and the process laid out above. Worst case, you may be able to recover the missing data outside the array.
  11. Parity is showing green, which is a good sign. If you want to try, you can rebuild your drive, or get a new drive. Before you do either, shut down and reseat the cables on disk 1 and on the motherboard. I'll have to defer to Turl or someone for a deep dive into the diagnostics. Let's hold off on recovery steps for now and verify cabling. Grab one more diag and save it somewhere. Stop the array, set auto-start array to No in Settings under Disk Settings, shut down, reseat the cabling, and power back on. Then we hold... Disk 1 being green-balled makes me hesitant about a rebuild from parity right now.
  12. What do the shares show? My guess is appdata lived on disk 1. What does MC show from PuTTY or from the unRAID boot-to-GUI? In Midnight Commander you'd back out to /mnt/disk1; it looks like an old-school DOS shell. Take a peek in there. Who knows, your data may be there, but the state it was in when doing the xfs repair might need a reboot to reset the shares. Report back on what shows in /mnt/disk1 first before going that route. What you find will push us in one direction or toward a rebuild of disk 1.
  13. Go to docker settings and see if your appdata location is valid first. The likely reason you can't see it is that the docker.img file was corrupted or not found. Do you only have the 2 drives? If so, your parity is effectively a RAID 1 mirror rather than just a non-filesystem parity calculation. Whatever you do, don't do any parity rebuild until you confirm with the community the configuration you had prior to the problem, including drive counts. I don't want you overwriting parity/mirror data if nothing is seen on your drive.
  14. Depends on the AM4 board. Typically the second PCIe 3.0 x16 slot, when a card is populated in the first slot, runs at x8, not x4, speed. I was aware he did that in the first slot; I was talking about whether he was planning to keep the card in the rig. Almost all X370 and some B350 boards have this option with 2 PCIe cards. I have yet to see a gaming card that requires full x16 PCIe 3.0 throughput; hell, PCIe 2.0 x16 is just beginning to be a limiting factor. With this in mind, PCIe 2.0 runs x16 at a theoretical 16GB/s (bidirectional) and PCIe 3.0 runs x16 at a theoretical 32GB/s, so with half the lanes, x8 on PCIe 3.0 is ~16GB/s. The performance difference between the two should be negligible.
  15. Create a share and use that as the backup location for a CrashPlan docker, available through the Community Apps plugin. That would be my recommendation. Just be aware of how much memory your system has, as there are some Java memory tweaks that can be done. You share out a code with family members and they auto-backup to you.
  16. If it's showing up as UEFI, check for and disable Secure Boot. I had an issue finding this on my ASUS board too. That will allow a legacy-style boot into unRAID.
  17. I think if you are running an AMD GPU, the BIOS ROM file isn't needed for the card in the first PCIe x16 slot. It's good to keep a copy of your ROM file stashed somewhere too; it can come in handy if you part out the box and move the card to another rig.
  18. That being said, about Plex: have you looked into dockers? My Plex docker performance has been stellar on Ryzen.
  19. Yup, my Ryzen test bench is doing essentially what you are. I'm planning to move my prod disks over in about 2 weeks, after I beat on my box some more for VM performance tweaking. I'm just running a single drive (in my case an M.2 NVMe, but it's listed as a data drive for testing). The big thing is to follow the known fixes, like turning off the C6 state and immediately updating your BIOS, and it should be pretty straightforward.
  20. The big question is: are you doing passthrough of GPUs for the VMs? If so, that would likely be the only sticking point, as Ryzen doesn't have an integrated GPU on the die. You'll need to follow Gridrunner's video on passthrough if using Nvidia in the first slot. Not a huge deal, but also look at isolating the VMs from the OS, leaving a few cores for the OS. My VMs run decently with 8GB of RAM, but figure Plex needs about 4GB, plus the OS, to run decently. Keep the Plex cores separate from the VMs using at least --cpuset-cpus=0,1 (see the pinning sketch after this list). Otherwise, Godspeed and good luck!
  21. The first thing to do is update your BIOS; most boards are being sent out with early revisions. If you didn't get memory with Samsung B-die, you may want to hold off on playing with memory timings until you get back. Stock 2133 is enough to get you started. An OC with an aftermarket cooler should be straightforward and fast to do. And turn off the C6 state....
  22. I've had no stability problems or lock-ups on my 1600X test box with the C6 state turned off. For VM passthrough I'm trying to dial in isolation and CPU pinning in the VM with 6 vCPUs. Threads appear to be paired directly in the system, 0->1, 2->3, if I'm reading the system tools properly (see the topology sketch after this list). Got the system OC'ed to 4.1 without issue. Is there a consensus from those running VMs on whether to turn NPT on or off? I dialed back my vCPUs and am having a hell of a time getting it to load properly with 4 cores; there's no difference in performance between the two. MSI is "enabled" for my GTX 1080 and it still runs with a 3DMark Fire Strike Extreme score of about 6100.
  23. Count me in the group that's testing. I brought my R5 1600X online yesterday evening and will begin playing with passthrough today. Testing Fire Strike and games today with Windows, then off to play with KVM. Specs: R5 1600X, MSI X370 SLI PLUS (BIOS 3.2), NVMe M.2 WD Black 256GB cache, 1TB data drive, and a GTX 1080 I'm hoping to pass through.
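
For the transcode change in post 2, here's a minimal sketch of what the mapping can look like using the LinuxServer.io Plex image; the paths, PUID/PGID, and share names are only example assumptions, not anyone's exact setup, so substitute your own:

    # Map Plex's transcode directory to the cache drive instead of RAM (/tmp).
    # Paths and IDs below are assumptions; adjust to your own shares.
    docker run -d \
      --name=plex \
      --net=host \
      -e PUID=99 -e PGID=100 \
      -v /mnt/cache/appdata/plex:/config \
      -v /mnt/user/Media:/media \
      -v /mnt/cache/appdata/plex/transcode:/transcode \
      linuxserver/plex
    # Then in Plex: Settings > Transcoder > "Transcoder temporary directory" = /transcode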
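
For the database move in post 8, a rough sketch from the unRAID console; the appdata path and the plex_backup share name are assumptions, so use whatever matches your box:

    # Copy the existing Plex appdata (database, metadata, watch history)
    # to a temporary share on the array before switching dockers.
    mkdir -p /mnt/user/plex_backup
    rsync -a --progress /mnt/cache/appdata/plex/ /mnt/user/plex_backup/plex/
    # Once the new docker is created, copy this back into its appdata
    # location, then start the container.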
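
For the core isolation in post 20, a quick sketch; the container name "plex" and cores 0,1 are just examples:

    # Restrict an existing Plex container to cores 0 and 1, leaving the
    # remaining cores free for the VMs.
    docker update --cpuset-cpus=0,1 plex
    # Or add --cpuset-cpus=0,1 to the container's Extra Parameters field
    # in the unRAID template so it sticks across re-creations.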
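
For the thread pairing in post 22, one way to confirm which logical CPUs share a physical core before pinning vCPUs; the exact output varies by system:

    # List the logical-CPU to physical-core mapping; sibling threads on
    # Ryzen typically show up as pairs like 0-1, 2-3, and so on.
    lscpu -e=CPU,CORE,SOCKET
    # Or read the sibling lists directly:
    cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list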