scottw

Members

  • Posts: 205
  • Gender: Undisclosed


scottw's Achievements

Explorer (4/14)

Reputation: 2

Community Answers (1)

  1. Yeah, I was just reading about Appdata backup. That was my major concern: having my dockers and appdata on an “unprotected” single drive. Having the appdata backed up to the array on a schedule would eliminate those concerns for me. Thanks, Scott
  2. Hello, I have been running Unraid for years and never had a cache drive, I know, I know....LOL. I didn't realize what I was missing. I have added a 512GB NVMe cache drive and moved my dockers and downloads folder onto it, and wow, what a difference. I now have another NVMe drive that is exactly the same as the first. Should I mirror the existing drive for redundancy, or create another cache pool (with one drive in each) and split the downloads folder and docker up? It almost feels like a waste to have a 20GB docker image file on a 512GB drive that will probably never grow, but I will take the advice of someone else. Thanks, Scott
  3. Just to follow up in case it helps someone else: I was able to use the docker cp command to copy from the container to the local filesystem (a sketch of the command is after this list). Worked perfectly! Scott
  4. Yeah, exactly what I was thinking, but I have no idea how to do that. I know how to do it by editing a “Community Apps” docker, but not this one; it just allows me to get to the console. Any idea how to do that, or somewhere I can read up on it? Thanks, Scott
  5. Hello. I am a bit over my head here and could use some help. I created a SQL Server 2019 docker by following these instructions: https://learn.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker?view=sql-server-ver15&pivots=cs1-bash Everything has been working great for a couple of years now. I am now learning more and realized I set my docker image to a size that is too big (and wasteful) due to my lack of knowledge. I would now like to size that docker image down and would like to know the preferred way to do it. Since this docker container was not created through Community Apps, I would have to do it from scratch....not a problem. I would, however, like to save the DB inside of that container. I can connect to that DB with SQL Server Management Studio but have no idea how to back up the DB OUTSIDE of the docker container...LOL. I realize this should be simple, but I am just not up to speed with all of that yet...but am learning. I understand how Volume Mapping works, but it appears my docker container does not have any mapped. Is this something I could do manually to accomplish what I want, or is there an easier way (see the backup sketch after this list)? Like I said, I just need to be able to back up my SQL DB somewhere safe so I can restore it after I recreate the docker.img file. Here is the container, if that helps: Thanks, Scott
  6. Sorry if this has been asked before. I disabled DNS Rebinding protection on my Fios router and was able to set up remote access just fine. My question is: I assume I leave DNS Rebinding disabled to keep this working? Am I opening up a security risk I should be worried about? Do people just leave this disabled with no issue? Sorry for such a basic question, Scott
  7. Thank you, thank you, thank you! I think I am finally back up. My docker was dead but I was able to delete and re-create it and just restore from Previous Apps. Thanks again! Scott
  8. Thanks, I did just that. I am running into an Unmountable Boot Disk now on that disk, but I created another topic on that as I think it may be a separate issue? Or should I delete that post and put the details here? Thanks again for all of your help! Scott
  9. I was using the parity swap method to replace a bad drive with a larger-than-parity drive, so I followed the guide. I "copied" to the new parity and, after that was done, it was doing a data rebuild onto the old parity drive (in the bad disk's slot) and now it shows this: I attached my Diag.
EDIT: I should also point out that the Unmountable Disk was my old parity drive (which I thought was still good), but I do have a new 4tb drive I had planned to replace that disk (disk1) with after this process was finished. Don't know if that's still an option or if I lost everything on disk1 already.
EDIT2: Did an xfs_repair in Maintenance Mode with the -n switch (a sketch of the command is after this list) and this was the output:
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
- scan filesystem freespace and inode maps...
agf_freeblks 29078110, counted 29078240 in ag 0
agf_freeblks 6819842, counted 6819974 in ag 2
agf_freeblks 4968062, counted 4937793 in ag 1
agf_freeblks 18856952, counted 18857398 in ag 3
agi_freecount 24, counted 107 in ag 0
agi_freecount 24, counted 107 in ag 0 finobt
agi_freecount 95, counted 94 in ag 1
agi_freecount 95, counted 94 in ag 1 finobt
agi_freecount 113, counted 122 in ag 2
agi_freecount 113, counted 122 in ag 2 finobt
sb_fdblocks 59198793, counted 59693421
- found root inode chunk
Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
Metadata CRC error detected at xfs_dir3_data block 0x50/0x1000
corrupt block 0 in directory inode 102 would junk block
no . entry for directory 102
no .. entry for directory 102
- agno = 1
- agno = 2
- agno = 3
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 3
- agno = 2
corrupt block 0 in directory inode 102 would junk block
no . entry for directory 102
no .. entry for directory 102
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem ...
Metadata CRC error detected at xfs_dir3_data block 0x50/0x1000
wrong FS UUID, directory inode 102 block 80
bad hash table for directory inode 102 (no data entry): would rebuild
would create missing "." entry in dir ino 102
entry "Science Fair" in dir ino 3221225570 doesn't have a .. entry, will set it in ino 102.
wrong FS UUID, directory inode 102 block 80
would create missing "." entry in dir ino 102
- traversal finished ...
- moving disconnected inodes to lost+found ...
disconnected inode 130, would move to lost+found disconnected inode 131, would move to lost+found disconnected inode 132, would move to lost+found disconnected inode 133, would move to lost+found disconnected inode 134, would move to lost+found disconnected inode 135, would move to lost+found disconnected inode 136, would move to lost+found disconnected inode 137, would move to lost+found disconnected inode 138, would move to lost+found disconnected inode 139, would move to lost+found disconnected inode 140, would move to lost+found disconnected inode 141, would move to lost+found disconnected inode 142, would move to lost+found disconnected inode 143, would move to lost+found disconnected inode 144, would move to lost+found disconnected inode 145, would move to lost+found disconnected inode 146, would move to lost+found disconnected inode 147, would move to lost+found disconnected inode 148, would move to lost+found disconnected inode 149, would move to lost+found disconnected inode 150, would move to lost+found disconnected inode 151, would move to lost+found disconnected inode 152, would move to lost+found disconnected inode 153, would move to lost+found disconnected inode 154, would move to lost+found disconnected inode 155, would move to lost+found disconnected inode 156, would move to lost+found disconnected inode 157, would move to lost+found disconnected inode 158, would move to lost+found disconnected inode 159, would move to lost+found disconnected inode 1088992, would move to lost+found disconnected inode 1088993, would move to lost+found disconnected inode 1088994, would move to lost+found disconnected inode 1088995, would move to lost+found disconnected inode 1088996, would move to lost+found disconnected inode 1088997, would move to lost+found disconnected inode 1088998, would move to lost+found disconnected inode 1088999, would move to lost+found disconnected inode 1089000, would move to lost+found disconnected inode 1089001, would move to lost+found disconnected inode 1089002, would move to lost+found disconnected inode 1089003, would move to lost+found disconnected inode 1089004, would move to lost+found disconnected inode 1089005, would move to lost+found disconnected inode 1089006, would move to lost+found disconnected inode 1089007, would move to lost+found disconnected inode 1089008, would move to lost+found disconnected inode 1089009, would move to lost+found disconnected inode 1089010, would move to lost+found disconnected inode 1089011, would move to lost+found disconnected inode 1089012, would move to lost+found disconnected inode 1089013, would move to lost+found disconnected inode 1089014, would move to lost+found disconnected inode 1089015, would move to lost+found disconnected inode 1089016, would move to lost+found disconnected inode 1089017, would move to lost+found disconnected inode 1089018, would move to lost+found disconnected inode 1089019, would move to lost+found disconnected inode 1089020, would move to lost+found disconnected inode 1089021, would move to lost+found disconnected inode 1089022, would move to lost+found disconnected inode 1089023, would move to lost+found disconnected inode 1089024, would move to lost+found disconnected inode 1089025, would move to lost+found disconnected inode 1089026, would move to lost+found disconnected inode 1089027, would move to lost+found disconnected inode 1089028, would move to lost+found disconnected inode 1089029, would move to lost+found disconnected inode 1089030, would move to lost+found disconnected 
inode 1089031, would move to lost+found disconnected inode 1089032, would move to lost+found disconnected inode 1089033, would move to lost+found disconnected inode 1089034, would move to lost+found disconnected inode 1089035, would move to lost+found disconnected inode 1089036, would move to lost+found disconnected inode 1089037, would move to lost+found disconnected inode 1089038, would move to lost+found disconnected inode 1089039, would move to lost+found disconnected inode 1089040, would move to lost+found disconnected inode 1089041, would move to lost+found disconnected inode 1089042, would move to lost+found disconnected inode 1089043, would move to lost+found disconnected inode 1089044, would move to lost+found disconnected inode 1089045, would move to lost+found disconnected inode 1089046, would move to lost+found disconnected inode 1089047, would move to lost+found disconnected inode 1089048, would move to lost+found disconnected inode 1089049, would move to lost+found disconnected inode 1089050, would move to lost+found disconnected dir inode 2149391464, would move to lost+found
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
Thanks, Scott
unraid-diagnostics-20200930-1517.zip
  10. Running step 14 now, but I have a question for when it is done. I have a new 4tb drive in for the new parity, and the old parity drive that I am copying from will be removed from the system. I have another 4tb drive that I am replacing the bad drive with. After step 14, can I power down, replace the old parity drive with the new 4tb, and then resume with step 15? Or should I just follow the process, rebuild onto the old parity drive, and then replace it after the entire process is done? I hope that makes sense. Scott
  11. Yup, I read through it and think I understand it. I made so many mistakes up to this point that I just wanted reassurance first. Will get into it in a bit. Thanks! Scott
  12. OK, thanks. I just wanted to make sure I can still do that with the disabled disk. That is different from what I originally told you, so I wanted to ask. Thanks! Scott
  13. Again, thanks for your help. I put the original parity drive back in, did the New Config, and started the array with the "trust parity" option. It took a while, but I was able to get the array to start with the "old" (original) parity drive, though the "bad" disk is now showing disabled. I have 2 new 4tb drives but don't think I can replace that bad drive with one of those yet because my parity is only 2tb, right? Just verifying what I should do next before I cause any more damage :). Thanks again!!! Scott
  14. Having a hard time starting the array. I shut down the machine, removed the "new" parity drive and put the old one back in. Started the machine and it has been stuck for 30 minutes trying to start the array. I think the failing drive may be causing issues with it. Can I remove the failing drive, boot up, re-assign the old parity disk and do the new config without losing data? I am also trying to turn off the auto-start of the disks but it is not letting me change anything while the array is stuck starting. Sorry for all of the questions, Scott
  15. Excellent, thanks for the explanation.
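
A minimal sketch of the docker cp usage described in answer 3, with a hypothetical container name and paths standing in for the real ones:

  # Copy a file (or directory) from inside a container to the host filesystem.
  # "sql2019" and both paths are placeholders.
  docker cp sql2019:/var/opt/mssql/data/MyDb.bak /mnt/user/backups/MyDb.bak

  # The reverse direction (host to container) works the same way.
  docker cp /mnt/user/backups/MyDb.bak sql2019:/var/opt/mssql/data/MyDb.bak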
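For answer 5, a sketch of one way this could be done, assuming the Microsoft quickstart image (mcr.microsoft.com/mssql/server:2019-latest); the container name, database name, SA password, and host paths are all placeholders, not the actual setup:

  # 1) Take a T-SQL backup inside the container via sqlcmd.
  docker exec sql2019 /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'YourStrong@Passw0rd' \
    -Q "BACKUP DATABASE [MyDb] TO DISK = N'/var/opt/mssql/data/MyDb.bak'"

  # 2) Copy the .bak file to the host so it survives recreating docker.img.
  docker cp sql2019:/var/opt/mssql/data/MyDb.bak /mnt/user/backups/MyDb.bak

  # 3) When recreating the container, map /var/opt/mssql to host storage so the
  #    databases live outside the docker image (host path is a placeholder).
  docker run -d --name sql2019 \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=YourStrong@Passw0rd' \
    -p 1433:1433 \
    -v /mnt/user/appdata/mssql:/var/opt/mssql \
    mcr.microsoft.com/mssql/server:2019-latest

With a volume mapping like that in place, the .bak can be restored with a normal RESTORE DATABASE from SQL Server Management Studio.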
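For answer 9, a sketch of the xfs_repair calls being discussed, run from the console with the array started in Maintenance Mode; "/dev/md1" is a placeholder for whichever array device is affected:

  # Dry run: -n reports problems but changes nothing.
  xfs_repair -n /dev/md1

  # The actual repair is the same command without -n. If it refuses to run because
  # of a dirty log, -L zeroes the log, at the cost of possibly losing the most
  # recent metadata updates, so it is a last resort.
  xfs_repair /dev/md1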