Posts posted by scottw

  1. Hello,

    I have been running Unraid for years and never had a cache drive. I know, I know...LOL. I didn't realize what I was missing.

     

    I have added a 512GB NVMe cache drive and moved my dockers and downloads folder onto it, and wow, what a difference. I now have another NVMe drive that is the exact same model as the first. Should I mirror the existing drive for redundancy, or create another cache pool (with one drive in each) and split the downloads folder and docker image between them? It almost feels like a waste to have a 20GB docker image file on a 512GB drive that will probably never grow, but I will take the advice of someone else.

     

    Thanks,

    Scott

  2. Hello. I am a bit over my head here and could use some help.

    I created a SQL Server 2019 docker by following these instructions:
    https://learn.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker?view=sql-server-ver15&pivots=cs1-bash

     

    Everything has been working great for a couple of years now. I am learning more and realized that, due to my lack of knowledge, I set my docker image to a size that is too big (and wasteful). I would now like to shrink that docker image and want to know the preferred way to do it. Since this docker container was not created through Community Apps, I would have to recreate it from scratch...not a problem. I would, however, like to save the DB inside that container. I can connect to the DB with SQL Server Management Studio, but I have no idea how to back up the DB OUTSIDE of the docker container...LOL

    I realize this should be simple, but I am just not up to speed with all of that yet...but I am learning. I understand how volume mapping works, but it appears my docker container does not have any volumes mapped. Is this something I could do manually to accomplish what I want, or is there an easier way? Like I said, I just need to be able to back up my SQL DB somewhere safe so I can restore it after I recreate the docker.img file.
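    From what I have been reading, one way to do this might be to run the backup inside the container and then copy the .bak file out; a rough sketch, where the container name sql1, the database name MyDb, the SA password, and the host backup path are all just placeholders for my setup:

```shell
# Back up the database to a file inside the container.
# "sql1", "MyDb", and the SA password are placeholders; adjust to your setup.
docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd \
  -S localhost -U SA -P 'YourStrong!Passw0rd' \
  -Q "BACKUP DATABASE [MyDb] TO DISK = N'/var/opt/mssql/data/MyDb.bak'"

# Copy the .bak file from the container to a share on the Unraid array.
docker cp sql1:/var/opt/mssql/data/MyDb.bak /mnt/user/backups/MyDb.bak
```

    Restoring after recreating the container would presumably be the reverse: docker cp the .bak back in, then RESTORE DATABASE from sqlcmd or Management Studio. Does that sound right?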

    Here is the container, if that helps:

     

    [screenshot: Docker container settings]

     

    Thanks,

    Scott

  3. So sorry if that has been asked before.

     

    I disabled DNS Rebinding on my Fios router and was able to set up remote access just fine. My question is: I assume I leave DNS Rebinding disabled to keep this working? Am I opening up a security risk I should be worried about? Do people just leave this disabled with no issues?

     

     

    Sorry for such a basic question,

     

    Scott 

  4. Thanks, I did just that. I am now running into an Unmountable Disk error on that disk, but I created another topic on that, as I think it may be a separate issue. Or should I delete that post and put the details here?

     

    Thanks again for all of your help!

    Scott

  5. I was using the parity swap procedure to replace a bad drive with a drive larger than parity, so I followed the guide. I "copied" to the new parity drive, and after that was done, it ran a data rebuild onto the old parity drive (in the bad disk's slot), and now it shows this:
    [screenshot: array status after the rebuild]

     

    I attached my diagnostics.

     

     

    EDIT: I should also point out that the Unmountable Disk is my old parity drive (which I thought was still good), but I do have a new 4TB drive I had planned to replace that disk (disk1) with after this process finished. I don't know if that's still an option, or if I have already lost everything on disk1.

     

    EDIT2: I ran xfs_repair in Maintenance Mode with the -n switch, and this was the output:
    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
            - zero log...
    ALERT: The filesystem has valuable metadata changes in a log which is being
    ignored because the -n option was used.  Expect spurious inconsistencies
    which may be resolved by first mounting the filesystem to replay the log.
            - scan filesystem freespace and inode maps...
    agf_freeblks 29078110, counted 29078240 in ag 0
    agf_freeblks 6819842, counted 6819974 in ag 2
    agf_freeblks 4968062, counted 4937793 in ag 1
    agf_freeblks 18856952, counted 18857398 in ag 3
    agi_freecount 24, counted 107 in ag 0
    agi_freecount 24, counted 107 in ag 0 finobt
    agi_freecount 95, counted 94 in ag 1
    agi_freecount 95, counted 94 in ag 1 finobt
    agi_freecount 113, counted 122 in ag 2
    agi_freecount 113, counted 122 in ag 2 finobt
    sb_fdblocks 59198793, counted 59693421
            - found root inode chunk
    Phase 3 - for each AG...
            - scan (but don't clear) agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
    Metadata CRC error detected at xfs_dir3_data block 0x50/0x1000
    corrupt block 0 in directory inode 102
        would junk block
    no . entry for directory 102
    no .. entry for directory 102
            - agno = 1
            - agno = 2
            - agno = 3
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 0
            - agno = 1
            - agno = 3
            - agno = 2
    corrupt block 0 in directory inode 102
        would junk block
    no . entry for directory 102
    no .. entry for directory 102
    No modify flag set, skipping phase 5
    Phase 6 - check inode connectivity...
            - traversing filesystem ...
    Metadata CRC error detected at xfs_dir3_data block 0x50/0x1000
    wrong FS UUID, directory inode 102 block 80
    bad hash table for directory inode 102 (no data entry): would rebuild
    would create missing "." entry in dir ino 102
    entry "Science Fair" in dir ino 3221225570 doesn't have a .. entry, will set it in ino 102.
    wrong FS UUID, directory inode 102 block 80
    would create missing "." entry in dir ino 102
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    disconnected inode 130, would move to lost+found
    disconnected inode 131, would move to lost+found
    disconnected inode 132, would move to lost+found
    disconnected inode 133, would move to lost+found
    disconnected inode 134, would move to lost+found
    disconnected inode 135, would move to lost+found
    disconnected inode 136, would move to lost+found
    disconnected inode 137, would move to lost+found
    disconnected inode 138, would move to lost+found
    disconnected inode 139, would move to lost+found
    disconnected inode 140, would move to lost+found
    disconnected inode 141, would move to lost+found
    disconnected inode 142, would move to lost+found
    disconnected inode 143, would move to lost+found
    disconnected inode 144, would move to lost+found
    disconnected inode 145, would move to lost+found
    disconnected inode 146, would move to lost+found
    disconnected inode 147, would move to lost+found
    disconnected inode 148, would move to lost+found
    disconnected inode 149, would move to lost+found
    disconnected inode 150, would move to lost+found
    disconnected inode 151, would move to lost+found
    disconnected inode 152, would move to lost+found
    disconnected inode 153, would move to lost+found
    disconnected inode 154, would move to lost+found
    disconnected inode 155, would move to lost+found
    disconnected inode 156, would move to lost+found
    disconnected inode 157, would move to lost+found
    disconnected inode 158, would move to lost+found
    disconnected inode 159, would move to lost+found
    disconnected inode 1088992, would move to lost+found
    disconnected inode 1088993, would move to lost+found
    disconnected inode 1088994, would move to lost+found
    disconnected inode 1088995, would move to lost+found
    disconnected inode 1088996, would move to lost+found
    disconnected inode 1088997, would move to lost+found
    disconnected inode 1088998, would move to lost+found
    disconnected inode 1088999, would move to lost+found
    disconnected inode 1089000, would move to lost+found
    disconnected inode 1089001, would move to lost+found
    disconnected inode 1089002, would move to lost+found
    disconnected inode 1089003, would move to lost+found
    disconnected inode 1089004, would move to lost+found
    disconnected inode 1089005, would move to lost+found
    disconnected inode 1089006, would move to lost+found
    disconnected inode 1089007, would move to lost+found
    disconnected inode 1089008, would move to lost+found
    disconnected inode 1089009, would move to lost+found
    disconnected inode 1089010, would move to lost+found
    disconnected inode 1089011, would move to lost+found
    disconnected inode 1089012, would move to lost+found
    disconnected inode 1089013, would move to lost+found
    disconnected inode 1089014, would move to lost+found
    disconnected inode 1089015, would move to lost+found
    disconnected inode 1089016, would move to lost+found
    disconnected inode 1089017, would move to lost+found
    disconnected inode 1089018, would move to lost+found
    disconnected inode 1089019, would move to lost+found
    disconnected inode 1089020, would move to lost+found
    disconnected inode 1089021, would move to lost+found
    disconnected inode 1089022, would move to lost+found
    disconnected inode 1089023, would move to lost+found
    disconnected inode 1089024, would move to lost+found
    disconnected inode 1089025, would move to lost+found
    disconnected inode 1089026, would move to lost+found
    disconnected inode 1089027, would move to lost+found
    disconnected inode 1089028, would move to lost+found
    disconnected inode 1089029, would move to lost+found
    disconnected inode 1089030, would move to lost+found
    disconnected inode 1089031, would move to lost+found
    disconnected inode 1089032, would move to lost+found
    disconnected inode 1089033, would move to lost+found
    disconnected inode 1089034, would move to lost+found
    disconnected inode 1089035, would move to lost+found
    disconnected inode 1089036, would move to lost+found
    disconnected inode 1089037, would move to lost+found
    disconnected inode 1089038, would move to lost+found
    disconnected inode 1089039, would move to lost+found
    disconnected inode 1089040, would move to lost+found
    disconnected inode 1089041, would move to lost+found
    disconnected inode 1089042, would move to lost+found
    disconnected inode 1089043, would move to lost+found
    disconnected inode 1089044, would move to lost+found
    disconnected inode 1089045, would move to lost+found
    disconnected inode 1089046, would move to lost+found
    disconnected inode 1089047, would move to lost+found
    disconnected inode 1089048, would move to lost+found
    disconnected inode 1089049, would move to lost+found
    disconnected inode 1089050, would move to lost+found
    disconnected dir inode 2149391464, would move to lost+found
    Phase 7 - verify link counts...
    No modify flag set, skipping filesystem flush and exiting.
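    For reference, this is roughly the sequence I understand is recommended (I am assuming disk1 maps to /dev/md1; the device name may differ on other Unraid versions):

```shell
# With the array started in Maintenance Mode, run against the md device
# so parity stays in sync ("/dev/md1" for disk1 is my assumption).

# Dry run first: -n reports problems but changes nothing.
xfs_repair -n /dev/md1

# Actual repair (no -n). If it refuses because of the dirty log,
# -L zeroes the log but can lose recent changes, so last resort only.
xfs_repair /dev/md1
```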

     

    Thanks,

    Scott

    unraid-diagnostics-20200930-1517.zip

  6. Running step 14 now, but I have a question for when it is done.

    [screenshot: parity copy in progress]
    I have a new 4TB drive in for the new parity, and the old parity drive that I am copying from will be removed from the system. I have another 4TB drive that I am replacing the bad drive with.

    After step 14, can I power down, replace the old parity drive with the new 4TB drive, and then resume at step 15?

    Or should I just follow the process, rebuild onto the old parity drive, and then replace it after the entire process is done?

     

    I hope that makes sense :)

     

    Scott

  7. Again, thanks for your help. I put the original parity drive back in, did the New Config, and started the array with the "trust parity" option.

    It took a while, but I was able to get the array to start with the "old" (original) parity drive. However, the "bad" disk is now showing as disabled.

    [screenshot: array with the disabled disk]

     

    I have two new 4TB drives, but I don't think I can replace that bad drive with one of them yet because my parity is only 2TB, right?

     

    Just verifying what I should do next, before I cause any more damage :)

     

    Thanks again!!!

     

    Scott

  8. Having a hard time starting the array. 

    I shut down the machine, removed the "new" parity drive, and put the old one back in. I started the machine, and it has been stuck for 30 minutes trying to start the array. I think the failing drive may be causing issues. Can I remove the failing drive, boot up, re-assign the old parity disk, and do the New Config without losing data?

    I am also trying to turn off auto-start of the array, but it is not letting me change anything while the array is stuck starting.

     

    Sorry for all of the questions,

    Scott

  9. Thanks! Just trying to learn, but how is putting the old drive back in better than syncing with this new drive? If I put the old drive back in, will it try to rebuild that drive from scratch, or will it use the data that's already on the old parity drive?

     

    I plan to do what you said, just trying to learn :)

     

    Thanks for the help,

    Scott

  10. I have five 2TB drives (including one parity) running on Unraid 6.5.3.
    I have a drive that is giving errors and looks to be failing. I bought two 4TB drives to replace the failing drive and the parity drive. I started by replacing the parity, because I can't replace the failing drive with a bigger drive before replacing parity.
    I am in the middle of a parity rebuild, and it is taking forever because of the "bad" drive. It ran overnight and got to 35%, and the ETA jumps between 9 hours and 350 days. Do I just let it run, or should I (can I) put the old parity drive back in and figure out how to replace the failing drive first?

     

    Thanks,

    Scott 

  11. Sorry, I wasn't trying to be a jerk.

    I normally update from the Docker page when it shows there is an update available. Sorry, I am fairly new to Docker in general.

    Will it show an update there (it currently does not), or do I have to update it another way? I currently have 3.2.28, and inside Emby it says 3.2.30 is available, but I'm pretty sure I'm not supposed to update through Emby directly. The Docker page shows it as up to date.

    Thanks for being patient with me.

  12. I know this is a stupid question, but I am new to VPN. I set all of this up and did not run into any problems. My question is this, though: I know you are connected through a VPN when the torrent is downloading, but do you need to worry when you click the link in a browser that is not VPN-connected? For example, I am on my laptop with no VPN, and in Chrome I click a torrent link, which automatically sends it to Deluge on my Unraid box, which is connected through VPN. My laptop was not connected to a VPN when I clicked the link; is that a problem?
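    One sanity check I came across: compare the public IP seen by the Unraid host with the one seen from inside the Deluge container (the container name binhex-delugevpn is just a guess at mine, and this assumes curl exists inside the container):

```shell
# Public IP of the Unraid host itself.
curl -s ifconfig.me; echo

# Public IP as seen from inside the VPN-routed Deluge container.
docker exec binhex-delugevpn curl -s ifconfig.me; echo
```

    If the two IPs differ, the torrent traffic is leaving through the VPN tunnel regardless of which machine the magnet link was clicked on.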

     

    Thanks!

  13. 16 hours ago, gridrunner said:

    Hi @scottw, I find that for me the Windows client doesn't work well. Download the .ovpn file (Yourself (user-locked profile)), then download the client from https://openvpn.net/index.php/open-source/downloads.html and use that software with the .ovpn file. Works great.

    @gridrunner, thanks, I will give that a shot. I would prefer to enter the password each time for security reasons. Using the .ovpn file on my iPhone prompts for a password, but I never thought of doing that on Windows. I thought I had to use the pre-made MSI.

     

    Thanks,

    Scott
