Clay Smith

Everything posted by Clay Smith

  1. The parity sync finished this morning with 0 errors, no more SMART errors appeared on Disk 4 during the process, and I can browse and write to the shares from Windows. I enabled the Docker service again and my Docker containers seem to be up and running now as well. Thank you so much for helping me get this back up and running so quickly.
  2. Sorry, I meant that as a different thought. Once all is said and done (assuming the parity sync goes smoothly) I would like to replace the drive. This drive is an older WD Green drive that was originally supposed to be temporary while I waited for a different drive to be RMA'd after failing a pre-clear. I'll just worry about getting through this first and not get ahead of myself. From the state we're in now, should I just power off the machine and leave it until I can check the cables tonight? I don't think I quite follow this, sorry. What file are you referring to?
  3. I am able to mount the disk using UD and can browse the files from within the Unraid webUI. What's the best plan for moving forward from here? You mentioned doing a new config and re-syncing parity. Should I wait to do that until after I've checked the cables? Regarding replacing the drive, should I let parity re-sync, then swap the drive and let it rebuild the new drive from parity?
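     If any files need to be pulled off the UD-mounted disk onto the rest of the array before doing the new config, the copy itself is a one-liner. A minimal sketch, assuming UD mounted the old Disk 4 under /mnt/disks/ using its device name (the same identifier that appears in the syslog line further down) and that disk3 has room; both paths are assumptions, not a prescription:

         # Copy everything from the UD-mounted old Disk 4 to a rescue folder on another array disk.
         # -a preserves ownership/permissions/timestamps; -v, -h and --progress just make it visible.
         rsync -avh --progress \
             /mnt/disks/WDC_WD60EZRX-00MVLB1_WD-WXL1H642CJCJ/ \
             /mnt/disk3/disk4-rescue/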
  4. Should I cancel the current xfs_repair operation to do this or wait until it completes? If starting the array in normal mode gives me access to the (emulated?) disk, would it be worth trying to copy any of my recent writes to another drive before attempting to mount with UD?
  5. I've started it and it's doing the same thing so far:
     Phase 1 - find and verify superblock...
     couldn't verify primary superblock - not enough secondary superblocks with matching geometry !!!
     attempting to find secondary superblock...
     followed by it generating a bunch of periods. While it was started in normal mode I was able to browse the shares in Windows and noticed that the appdata backups that weren't showing before were now visible, and a video file that IIRC was located on Disk 4 was also visible.
  6. Here they are. Disk 4 also now reports 'Unmountable: No file system'. yuki-diagnostics-20191210-1530.zip
  7. My system has a weird quirk where it won't boot if it's not hooked up to a monitor, and I'm not home to plug one in right now. When I start the array, should I start it normally or in maintenance mode? Is there anything I can do in the meantime before I reboot tonight, or should I just hold off and report back?
  8. It just completed from the run I started last night with -L:
     Phase 1 - find and verify superblock...
     couldn't verify primary superblock - not enough secondary superblocks with matching geometry !!!
     attempting to find secondary superblock...
     ...............(5.7 million dots)...........
     Sorry, could not find valid secondary superblock
     Exiting now.
     That's all it gives. I'd assume this means that it's not fixed. I'm not really sure where to go from here. Based on the syslog line from my last post, did it run properly? Or do I need to use the terminal to run a better command?
  9. I used the webGUI so I didn't type in a complete command. I clicked 'Disk 4' on the main page to get to the disk settings and then clicked the 'Check' button under the 'Check Filesystem Status' section. In the options box I had left just the -n that was there by default. At the time the syslog was full so I can't grab what it said then, but I have since truncated the syslog. When I ran it with the -L after I got home last night, this was the line in the syslog:
     Dec 9 20:28:12 Yuki ool www[13176]: /usr/local/emhttp/plugins/dynamix/scripts/xfs_check 'start' '/dev/md4' 'WDC_WD60EZRX-00MVLB1_WD-WXL1H642CJCJ' '-L'
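     For reference, that GUI 'Check' button appears to wrap xfs_repair, so the equivalent can be run from the terminal. A rough sketch, assuming the array is started in Maintenance mode so Disk 4 is exposed as /dev/md4 (the same device shown in the syslog line above); running against /dev/md4 rather than the raw sdX device keeps parity in sync with the repairs:

         # Read-only check: reports problems but changes nothing on disk.
         xfs_repair -n /dev/md4
         # Actual repair; if the metadata log can't be replayed it will refuse and suggest -L.
         xfs_repair /dev/md4
         # -L zeroes the metadata log before repairing, which can discard the most recent writes.
         xfs_repair -L /dev/md4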
  10. The wiki page said to run a -n first as a test but I suppose if I already know there is a problem then there's no reason to test it for problems. I'm out currently but I suppose when I get back to the server I should tell it to cancel and start it over with -L yes?
  11. Did I misunderstand the page? Should I have run it with -n and -L? Would it be right to cancel it? On the settings page for the disk it still says it's running.
  12. I followed the directions in the link for running the test through the GUI and left the default option of just -n
  13. I followed the instructions in your link and put the array in maintenance mode to run the test but it's just been sitting on the same step for a few hours:
     Phase 1 - find and verify superblock...
     couldn't verify primary superblock - not enough secondary superblocks with matching geometry !!!
     attempting to find secondary superblock...
     This is followed by a line with 3,708,871 periods.
  14. It is still connected. On the 'Main' page it has an X next to it and says 'Device is disabled, contents emulated'. Is it safe to spin up and check SMART while in this state? Under the 'Writes' column it claims it has 18,446,744,073,709,529,088 writes on that drive.
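     SMART can usually still be read from a disabled/emulated disk as long as it is physically connected, either from the disk's page in the GUI or from the terminal. A minimal sketch, where sdX is a placeholder for whatever device letter the Main page shows for Disk 4:

         # Quick overall health verdict from the drive's self-assessment.
         smartctl -H /dev/sdX
         # Full report: attributes (reallocated/pending sectors etc.), error log, self-test history.
         smartctl -a /dev/sdX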
  15. Does this not apply to me? Disk 4 is the drive that failed. It also reads further down that for XFS I will have to start the array in Maintenance mode. I have been hesitant to shut down or stop the array since Disk 5's files were not appearing when browsing shares, and I didn't know if something had happened to it as well. Unraid hasn't told me that anything is wrong with it, but I wasn't sure with the missing files and all. I don't mean to doubt you, and I'll run the test if you say that's what's best; I just want to make sure I lose as little data as possible. On a side note, my VMs are located on an SSD mounted by Unassigned Devices and are still working currently. Tailing my syslog shows 5 more shares on the repeating error list, and my syslog is 91% full according to the dashboard.
  16. Last week I was sick and didn't bother even looking at my server for a few days. When I finally got back to it I noticed that Disk 4 was disabled with its contents emulated. It's my oldest disk and I got it already heavily used, so I didn't think much of it and ordered a new drive as a replacement. This morning I threw the new drive into my second server to run a pre-clear and figured that in the meantime I might move around some data. I installed a new disk 3 weeks ago that was mostly empty, so I downloaded unbalance and remembered that the last time I worked with it I needed to run the Docker Safe New Perms script, so I ran it again here. I shut down my Docker containers but then decided against the whole thing, uninstalled unbalance, and tried to restart my containers, but a lot of them were shutting back down after trying to start up, and the ones that did start weren't working. I tailed my syslog and got a loop of I/O errors:
     Dec 8 18:24:47 Yuki emhttpd: error: get_fs_sizes, 6306: Input/output error (5): statfs: /mnt/user/Clay
     Dec 8 18:24:47 Yuki emhttpd: error: get_fs_sizes, 6306: Input/output error (5): statfs: /mnt/user/CommunityApplicationsAppdataBackup
     Dec 8 18:24:47 Yuki emhttpd: error: get_fs_sizes, 6306: Input/output error (5): statfs: /mnt/user/Handbrake
     Dec 8 18:24:47 Yuki emhttpd: error: get_fs_sizes, 6306: Input/output error (5): statfs: /mnt/user/Literature
     Dec 8 18:24:47 Yuki emhttpd: error: get_fs_sizes, 6306: Input/output error (5): statfs: /mnt/user/PlexTranscode
     Dec 8 18:24:47 Yuki emhttpd: error: get_fs_sizes, 6306: Input/output error (5): statfs: /mnt/user/Temp
     Dec 8 18:24:47 Yuki emhttpd: error: get_fs_sizes, 6306: Input/output error (5): statfs: /mnt/user/Terraria
     Dec 8 18:24:47 Yuki emhttpd: error: get_fs_sizes, 6306: Input/output error (5): statfs: /mnt/user/VMC
     Dec 8 18:24:47 Yuki emhttpd: error: get_fs_sizes, 6306: Input/output error (5): statfs: /mnt/user/appdata
     Dec 8 18:24:47 Yuki emhttpd: error: get_fs_sizes, 6306: Input/output error (5): statfs: /mnt/user/domains
     Dec 8 18:24:47 Yuki emhttpd: error: get_fs_sizes, 6306: Input/output error (5): statfs: /mnt/user/isos
     Dec 8 18:24:47 Yuki emhttpd: error: get_fs_sizes, 6306: Input/output error (5): statfs: /mnt/user/system
     This exact list of errors repeats every second in my syslog. I also tried installing unbalance a couple of times, just because I was confused about what was happening, but could never get it to run properly. I had to leave for quite some time, but when I got back I started looking into it again and found this line in my syslog:
     Dec 8 08:42:04 Yuki kernel: XFS (md4): Corruption of in-memory data detected. Shutting down filesystem
     At one point I was also wondering if the new permissions script had messed something up, as my containers were giving errors about accessing their config files, and I looked into restoring the appdata folder from a backup, but the restore appdata tab is only showing a backup from 2019-11-18 even though it was set to run every week. When browsing from a Windows machine to the location where the backups are kept, this is the only backup shown. Looking directly at Disk 5 in Unraid shows I should have backups from 2019-11-25 and 2019-12-02, but the backup that does show up from 11-18 is stored on Disk 1. Digging around, other files that the Unraid GUI shows should be on Disk 5 don't appear when browsing shares from Windows, including files I have copied from these shares to my Windows machine in the past few weeks.
     Attempting to write a folder to any share from Windows results in this error code:
     An unexpected error is keeping you from creating the folder. If you continue to receive this error, you can use the error code to search for help with this problem. Error 0x8007045D: The request could not be performed because of an I/O device error.
     I've read enough horror stories about people taking steps to fix problems and making them worse, so I hope I haven't already gone too far, and was wondering what I should do from here. The replacement drive is 15% through step 2 of 5 of the pre-clear. Did the system switch to read-only to protect itself, and will it fix itself once I shut down and replace the bad disk? Is there more I need to do in the meantime to recover? Is it even recoverable? Or do I start trying to copy as much as I can off while the system is still functioning? yuki-diagnostics-20191209-0032.zip
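     A few terminal one-liners of the sort used to gather the information above; the log location is standard for Unraid, but treat the exact commands as a sketch:

         # Watch the repeating statfs/I-O errors as they arrive.
         tail -f /var/log/syslog
         # Pull out only the XFS and I/O related lines for a quick overview.
         grep -iE "xfs|input/output error" /var/log/syslog | tail -n 50
         # /var/log lives in a small RAM filesystem, which is why an error repeating
         # every second can fill the syslog quickly.
         df -h /var/log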
  17. There were a couple of issues in the beginning that I'm not sure are still issues (once I changed my settings I never reverted them back to test), like disabling C-states, but I ran a 1700X for a year and a 2700X for a little over that and just swapped in a 3900X, and stability has been fine. I usually run 2-3 months between reboots without issue, and the reboots are generally for other reasons, not due to issues. That being said, I don't run GPU passthrough (got it to work on the 1700X as a neat thing I could do, but since I wasn't using it I took out the GPU to lower my power consumption), and I've read a lot about the newer AM4 BIOSes preventing passthrough, so I'd hold off on the 3600 or stick with the 2700X.
     This is dependent on your needs. Based on your Question 4 I'd say 32, but if you already have the 16 it's a fine starting point. Once you start trying to use Windows as a daily driver and gaming machine, though, I'd move to 32 so that 16 can go to that VM.
     As I mentioned above, I've used 3 chips across 3 generations of CPU on 2 motherboards (B350 and B450) and haven't had issues beyond the early-adopter things back in the day with 1st-gen Ryzen.
     I don't believe I am knowledgeable enough to answer this, but I think I remember reading something about Unraid having its own overhead that causes lower-than-desired write speeds on HDDs in the array, so if it's a hard limit and not a % of total write capacity it could be a huge bummer. Could be completely wrong though, so don't quote me.
     It'll work. Lots of things will work though. For now. You'll definitely see a cost benefit over time going with 80+ Gold (or higher) efficiency since you'll be running 24/7, and you'll need a lot higher wattage once you start using the daily-driver VM, especially with a high-end GPU like the 2080.
     As given, your spec will be adequate for the above until you start trying to use the daily-driver VM. I don't know enough about the requirements of pfSense or your Win10 VM listed for "b)" so I can't weigh in on which chip you should use for the upgrade. If you need more cores, go for the 2700X. If your core count seems good, then the 3600 should perform better in gaming once they can fix the GPU passthrough.
     I didn't have any issues activating my Win10 VM.
     Hopefully this helps, and hopefully others can give you some second opinions and fill in some of the blanks I couldn't.
  18. I got this up and running, but when first installing szurubooru-api it was throwing errors about config.yaml being a directory. I noticed the template had a path, labeled as (optional) by default, that created a config.yaml folder in the appdata/szurubooru folder. Deleting this path from the template allowed the API container to start properly. Other than that everything worked exactly as you instructed. Thanks for your work.
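     The likely mechanism behind the config.yaml-is-a-directory error is generic Docker behaviour: if the host side of a bind mount doesn't exist, Docker creates it as a directory, even when the container expects a file. A small stand-alone illustration; the host path mirrors the appdata location mentioned above and the container path is an assumption:

         # With no config.yaml present on the host, Docker creates the host path as an
         # empty DIRECTORY before starting the container...
         docker run --rm \
             -v /mnt/user/appdata/szurubooru/config.yaml:/opt/app/config.yaml \
             alpine sh -c 'ls -ld /opt/app/config.yaml'
         # ...so the application then complains that config.yaml is a directory.
         # Removing the optional mapping (as above) or touch-ing the file first avoids it.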
  19. This worked for me, thanks. Now to poke around and see if Taisun is useful for my environment.
  20. Unfortunately I'm on a B350 mATX board with only one full-size PCIe slot, so I can't move the card to dump its vBIOS. I tried the method for editing the vBIOS from TechPowerUp and that is still causing the Code 43. I've also attempted jayseejc's tip about adding video=efifb:off, without success as well. I'll try swapping in another card this weekend and see if maybe that does the trick. Thanks for the help.
     P.S. Just to add to the data set: I've been running stable for a few months now on 6.3.5 with the board's original BIOS, C-states off, 2 Windows VMs, and 8-10 Docker containers. The system would crash if any form of Linux VM was run for more than 2 hours. I upgraded the BIOS and Unraid to rc10 this past weekend and turned C-states back on, and everything seems to be running fine other than the passthrough problems. I have not tried a Linux VM after these updates yet, though. This is all on a 1700X, an Asus B350M-A/CSM, and 2x16GB of Corsair Dominator Platinum.
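     For anyone retracing the two workarounds mentioned above, both come down to small edits; the file names and paths below are assumptions, not a confirmed fix for the Code 43:

         # 1) jayseejc's video=efifb:off tip goes on the boot line of the flash drive's
         #    syslinux config (Main -> Flash -> Syslinux Configuration), e.g.
         #        append video=efifb:off initrd=/bzroot
         grep -n "append" /boot/syslinux/syslinux.cfg    # confirm the current boot line first
         # 2) A dumped or edited vBIOS ROM saved on the array, e.g.
         #    /mnt/user/isos/vbios/gtx670.rom, is referenced from the VM's XML by adding
         #    a <rom file='/mnt/user/isos/vbios/gtx670.rom'/> element inside the GPU's
         #    <hostdev> block (Edit VM -> XML view).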
  21. In the end what was your process for passing through the GPU? I'm trying to pass through a lone 670 but I keep running into code 43.