
jordanmw last won the day on June 14 2019

jordanmw had the most liked content!

Community Reputation

43 Good


About jordanmw

  • Rank
    Advanced Member


  • Personal Text
Corsair 740 Air case, ASRock Taichi X399, Threadripper 1920X, Corsair 64GB (8x8GB) 2933MHz, EVGA CLC 280, EVGA 1kW Gold PSU, Allegro Pro USB 3.0 to U.2 port/PLX bridge, 2x EVGA GTX 960 SSC 4GB, 2x EVGA 2070 Black, 2x Plextor 512GB SSD, 2x WD Black 1TB NVMe

  1. Having issues getting the latest beta on 7D2D, kinda like what happened with Alpha 18. I try to use the -beta latest_experimental game parameters and it doesn't load the new build. I was hoping it was just something simple, but even if I wipe the whole thing and redownload it, it just loads the 18.4 version that is in the main branch. Maybe they changed something again and threw your Docker off? I can provide logs if needed.
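For reference, the branch opt-in described above is normally passed through to SteamCMD. A minimal sketch of what the container would run under the hood; the app ID 294420 (7 Days to Die dedicated server) and the install path are assumptions, so check your container's documentation:

```shell
# SteamCMD: opt into the experimental branch of the 7D2D dedicated server.
# App ID 294420 and the install dir are assumptions; adjust for your setup.
steamcmd +force_install_dir /serverdata/serverfiles \
         +login anonymous \
         +app_update 294420 -beta latest_experimental validate \
         +quit
```

If the container strips or reorders these arguments, the -beta flag never reaches app_update, which would explain the stable 18.4 build loading instead.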
  2. Yup, possible and not terribly hard, but I will say that Threadripper is certainly what I would recommend. I initially got a 1920 and went to a 2950 after finding that the extra cores are almost a necessity if you want to run anything else, maybe a docker or two. My wife and I built our beast to do our weekly LAN party with another couple. They don't have to bring anything over to play, and I don't spend all evening setting up their machines, getting updates, etc. I also have 4 kids, so 4 gamers, 1 CPU was a need more than a want, and boy has it come in handy after the lockdown. You can check out my profile for build specs and the challenges that I ran into.
  3. You have to boot into the GUI now; it is not available from the command line. If you boot to the GUI, bring up a terminal, and then use it, the graphic will appear.
  4. Got 2 of the 4 VMs back up by grabbing some old configs and img files, grabbing some ISOs, and spending a few minutes searching the forums when I hit errors. I guess I need to refine my backup/restore process, since this is the second time I've had to perform it. My biggest fault was not noticing that one of my backup locations was assigned to the array disk that failed. I guess it threw me off that there were 3 different locations that had to be empty. Rookie mistake I guess, but I thought I had some ability to recover with a new disk as long as I caught it quickly. Even had a spare ready to drop in, and somehow I still lost a couple days of uptime. It might have been a bigger deal if these were anything but gaming machines and servers. My kids might disagree, but they're lucky to have it at all.
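The failure mode above (a backup target living on the very disk it is meant to protect) can be guarded against with a small pre-backup check. A hedged sketch; the paths are hypothetical, and note that on Unraid a /mnt/user share is a FUSE view spanning disks, so compare the underlying /mnt/diskN mount points rather than user shares:

```python
import os

def backup_is_on_different_device(source: str, backup: str) -> bool:
    """Return True if source and backup live on different filesystems/devices.

    Compares st_dev of the two paths; only meaningful for real mount points
    (e.g. /mnt/disk2 vs. an unassigned device), not FUSE-merged user shares.
    """
    return os.stat(source).st_dev != os.stat(backup).st_dev

# Hypothetical Unraid-style paths; adjust to your own layout:
# if not backup_is_on_different_device("/mnt/disk2", "/mnt/disks/backup"):
#     print("WARNING: backup target shares a device with the source!")
```

Running a check like this before each scheduled backup would have flagged the misassigned backup location before the disk failed.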
  5. Here are a few of the last diags before the array went offline. I realize now that I am missing my iso share and whatever else was in my /mnt/user, which means there are probably more things missing than I thought. Any other hopeful methods to recover or rebuild? I do have the XML config backups for my VMs also. download_2020-05-21_13-44-17.zip
  6. I do have CA backup and restore- with several of the restore points of the last few weeks- the only thing that I lost is the libvirt.img that somehow got put on the disk that went down- any way to recover that or recreate it easily?
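On the recreate side, one common approach is to let the VM service build a fresh libvirt.img and then re-register each VM from its saved XML. A sketch only; the image path is the usual Unraid default but should be verified in Settings > VM Manager, the backup path is hypothetical, and deleting the old image is only safe if it is truly unrecoverable:

```shell
# Stop the VM service first (Settings > VM Manager in the Unraid web UI).
# Default image path is an assumption; verify it in your VM Manager settings.
rm -f /mnt/user/system/libvirt/libvirt.img

# Re-enable the VM service so a fresh, empty libvirt.img is created, then
# re-register every VM from the XML backups (hypothetical backup path):
for xml in /mnt/user/backups/vm-xml/*.xml; do
    virsh define "$xml"
done
```

virsh define only registers the domain definitions; the VMs still need their vdisk images and any passed-through devices to be where the XML says they are.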
  7. Ok Johnnie, I'll assume that had something to do with it. Here are the questions I have at this point:
     1. At what point does a disk failure take all the data on that disk with it?
     2. Should I be preventing parity from syncing if I see errors?
     3. Will a reboot or stopping/starting the array make things worse?
     4. If I did see the errors and wanted to make ABSOLUTELY sure that I could recover, what are the best practices?
     5. Is there anything else I can do to make sure I have enough redundancy to recover if I do notice errors on one of my disks?
     Sorry for all the questions, but I really thought I had a handle on this process until my disk failure made everything melt down. I have backups of the really important stuff, but I really thought it was more resilient to a single disk failure. I guess the last question would be: is there a way to get back to just having errors on one drive, so I can go through the process from the point of those errors showing, or did replacing that drive and attempting a rebuild ruin any chance I had of getting things back to that error state?
  8. I don't think so, but I'll check. Sounds like I really need to understand more about how things are being written. What happens if a disk starts erroring while a parity sync is running? Does that totally negate my parity, or just fail the sync? Is there something I am not doing that I should be?
  9. No- it was not disabled until I stopped the array to unassign and replace it.
  10. I guess I must be in that fun category then, Johnnie! My disk was showing errors, so I went through the recommendations in the forum/support FAQs to replace the device that had errors. Should I have gone through some other process before I replaced the drive, or after? Shouldn't Unraid have been able to rebuild that drive since parity and all other disks were good? Are we saying that the file system corruption had gone on for so long that the rebuild just rebuilt garbage back onto the drive? What other things should I be doing to ensure a failure like this doesn't occur again? Am I really SOL on getting that disk data back? Is there some other file system I should consider that makes things more reliable or easier to recover?
  11. I went through all of those suggestions and am pretty bummed out that it errored on every attempt to read the filesystem and on all repair attempts. I guess I expected this to be a non-issue and Unraid to handle a single drive failure much better than I am seeing. I would still really like to recover, and I just noticed that my backup got overwritten last night! I wasn't holding much on that drive but had some Unraid system files there; I'm just not sure which ones were pointed to that disk. The diagnostic attached above was from before I lost my backup with last night's overwrite. Anyone have any suggestions on recovering OR rebuilding? I still have my VM images, and system\docker and system\libvirt; not sure what else I'm missing. Can someone help? Also, is there any way to better prepare for a single drive failure? Dual parity, or some better way to configure my disks to prevent a single disk from taking down my dockers and VMs? Shouldn't the contents of the disk have been emulated when the disk failed?
  12. I'm sure there are too many of these posts to count, but I am having this issue after replacing a drive with errors. I replaced disk2 in my array after seeing some errors and the array not starting. I had a replacement drive ready and used the typical process here: https://wiki.unraid.net/Replacing_a_Data_Drive. After the rebuild completed, the drive was not mountable, so I thought I could scan and repair the drive with the array started since it is btrfs, but the options are greyed out. I tried from the terminal and it also threw an error. I really didn't have a lot of data on that drive that I care about, but I can't even start the array and would really like to recover if possible. Diagnostics attached. Any suggestions to get things back to normal without loss? My parity and other disks never ran into any issues, so I should be able to recover, no? tower-diagnostics-20200519-1940.zip
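For a btrfs disk in this state, the usual non-destructive order of operations looks roughly like the sketch below. /dev/md2 is the assumed Unraid device node for disk2 and the destination paths are hypothetical; --repair is deliberately absent because on btrfs it can make a bad filesystem worse and is a last resort:

```shell
# 1. Read-only check first; never start with a repairing check on btrfs.
btrfs check --readonly /dev/md2

# 2. Try mounting read-only with a backup root before anything destructive:
mount -o ro,usebackuproot /dev/md2 /mnt/recovery

# 3. If it still will not mount, btrfs restore can copy files off
#    without mounting the filesystem at all (destination is hypothetical):
btrfs restore /dev/md2 /mnt/disks/rescue-target
```

Working against /dev/md2 rather than the raw /dev/sdX device keeps parity in sync with any changes, which matters if a later rebuild is attempted.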
  13. Hmm, nice build. Having a little RAM envy over here. This is what I have currently:
      ASRock Taichi X399 with TR 2950 (but started with a 1920X)
      64GB Corsair Dominator
      2x WD 1TB NVMe, passed to 2 machines
      2x Plextor 512GB, passed to 2 machines
      2x Samsung 1TB for cache array
      3x WD 3TB array data
      2x EVGA 2070 RTX
      2x EVGA 960 SSC
      Allegro Pro 4x USB PCIe
      U.2 port to PCIe 4x adapter
      4x Insignia USB hubs
      My use case is a bit different: using it as a 4-headed gaming machine. Looks like with what you are planning, you'll have quite a bit of capacity sitting idle. Should be plenty powerful to spin up a couple of nice VMs in addition to your Docker implementations.
  14. If someone runs into an exposed Unraid system, they should contact Limetech and give them the license # so Limetech can contact the owner and advise them on what to do next. Usually there is an email on file with them. Maybe shut it down to prevent others from pwning it.