SidebandSamurai


Posts posted by SidebandSamurai

  1. A long, long time ago, in a galaxy far, far away, there was a Plex server that was running and running and running.  Everything was good with the universe until the dreaded exclamation marks appeared on my movie, TV, and music folders.  Then things became dark and foreboding.

     

    So as you can see from my introduction, I have had a Plex media server installed on Unraid for roughly 6 years.  It's been working great.  One night I started to see exclamation marks next to all my media libraries, and I could not see any media at all.

     

    I was troubleshooting the exclamation-mark issue when I deleted my Plex server from authorized devices as a troubleshooting step.  That was a bad move: I ended up losing control of the server completely.  I found an article on the Plex site where you can strip certain sections out of the Preferences.xml file to regain control of the server.  Now I can at least see the server, but when I try to reclaim it, the reclaim does not go through.
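
    For reference, the kind of edit that article describes looks roughly like this (the appdata path and the exact attribute names are from memory, so treat them as assumptions and check your own file):

    # Stop the Plex container first, then edit Preferences.xml in appdata
    # (example path; check your Docker template's /config mapping):
    nano "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Preferences.xml"
    # Remove the account-binding attributes such as PlexOnlineToken,
    # PlexOnlineUsername and PlexOnlineMail, then restart the container
    # and try the claim again.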

     

    The one thing I would prefer not to do is remove the Docker container and rebuild it from scratch, as I would lose all the markers for what I have watched.  It would not be a huge loss, but it would be nice to keep that information.

     

    Can anyone help me?

     

     

  2. Good afternoon everyone.

     

    I have 1 final drive that I need to convert from ReiserFS to XFS.  My 2nd-to-last drive failed along with my parity, so I ended up being forced to convert that drive to XFS.

     

    Frankly, I don't care whether the parity is valid or not.  I just want this file system off my server.  My thought was ...

     

    Pull the 3T drive that has ReiserFS on it.  Execute a New Config, leaving that slot open for a new 10T drive I have coming.  In this state the parity will rebuild with the remaining 6 drives.

     

    Once the new 10T drive arrives, I will pre-clear it to make sure it's healthy, then add it to the array in the slot the old 3T drive was in and restart the array, which will again cause a parity rebuild.

     

    Next I will use rsync to copy all the files from the old 3T ReiserFS drive to the new 10T XFS drive.
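
    Something along these lines is what I have in mind for the copy (paths are examples only; the old drive would be mounted with Unassigned Devices):

    # -a preserves permissions/ownership/timestamps, -v lists each file,
    # --progress shows transfer status; source and destination are examples
    rsync -av --progress /mnt/disks/old3T/ /mnt/disk2/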

     

    I am fortunate, as I only had 2 ReiserFS drives left in my system; all the rest have been XFS ever since it became available.

     

    Any thoughts on this process?  I have taken a look at the documentation (not all of it, mind you; that is a lot of reading), but since I have already been through this process, I figure I can do it one more time.

  3. Well, I get to answer my very own question.

     

    I went to dinner and came back to address this problem further.  I ran across a post where an NTFS disk was recovered with ddrescue but the person could not mount it.  I then ran across this article:

     

     Guide to Using DDRescue to Recover Data

     

    I read this article from beginning to end looking for a clue as to what I might be doing wrong.

     

    There is a section called "Working with image files containing multiple partitions".

     

    After reading that section, I decided to follow its steps to see what I would find out.

     

    I moved into the recovery folder on disk1:

     

    cd /mnt/disk1/recovery/

     

    Next I executed this command:

     

    parted disk2.img

     

    The server responded with:

     

    GNU Parted 3.3
    Using /mnt/disk1/recovery/disk2.img
    Welcome to GNU Parted! Type 'help' to view a list of commands.
    (parted) 

     

    The next command in Parted was:

     

    unit

     

    Parted responded with:

     

    Unit?  [compact]?

     

    The next command in Parted was:

     

    B

     

    Parted responded with:

     

    (parted) 

     

    The next command to Parted was:

     

    print

     

    Parted responded with this information:

     

    Model:  (file)
    Disk /mnt/disk1/recovery/disk2.img: 3000592982016B
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags: 
    
    Number  Start   End             Size            File system  Name  Flags
     1      32768B  3000592965119B  3000592932352B  reiserfs
    
    (parted)                                                                  

     

    At this point I only saw one partition, but having read the article I remembered that this image has an offset: see, the start is at 32768 bytes, yet my mount command did not have an offset.  Just a little further down the article was the command I hoped I was looking for:

     

    mount -o loop,ro,offset=32768 /mnt/disk1/recovery/disk2.img /mnt/disks/drive2

     

    Note that with this mount command I am mounting the image read-only (ro) and supplying the start offset of 32768 bytes.

     

    After an agonizing minute, as disk1 spun up and the image started mounting, I was greeted with the following in dmesg:

     

    [32209.362880] REISERFS (device loop4): found reiserfs format "3.6" with standard journal
    [32209.362890] REISERFS (device loop4): using ordered data mode
    [32209.362890] reiserfs: using flush barriers
    [32209.363866] REISERFS (device loop4): journal params: device loop4, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
    [32209.363998] REISERFS (device loop4): checking transaction log (loop4)
    [32209.397773] REISERFS (device loop4): Using r5 hash to sort names

     

    I thought to myself, did it mount?

     

    I went to /mnt/disks/drive2 and voilà, I found my files.

     

    Yeah baby, I am back in business!

     

    A couple of things made this attempt successful.

     

    I set Parted to byte mode (not kilobytes), which gave me the precise location of the offset.

    Setting the mount to read-only (ro) protects the image from being written to, but the key to making this successful was offset=32768.  With it, the mount command was able to locate the correct area to read, and the mount completed successfully.
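
    For anyone finding this later, a possibly simpler alternative I did not try is to let losetup read the partition table itself instead of working out the offset by hand:

    # -f grabs the first free loop device, -P asks the kernel to scan the
    # image's partition table and expose /dev/loopXpN nodes, --show prints
    # which loop device was used
    losetup -fP --show /mnt/disk1/recovery/disk2.img    # prints e.g. /dev/loop4
    mount -o ro /dev/loop4p1 /mnt/disks/drive2          # p1 = first partition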

     

    Thanks to @SpaceInvaderOne: I was following his video on how to set up Krusader on my friend's server, and that video saved me a whole lot of headache.  This server is being set up for a friend, and I am doing a lot of things I did not do on my own server.  One of those was setting up shortcuts in Krusader.  These shortcuts made it easy as pie to copy my files from my friend's server over to my server.  After the recovery is finished, it's time to renovate my server and make it better.

  4. I have a question I am struggling with.

     

    I had a 3T drive experience errors, and at the same time I had a failed parity drive.  I removed both drives, replaced the parity drive with a 10T drive, and did a New Config so that the array could rebuild parity from the existing data.

     

    That left me with the bad parity drive and Drive 2.

     

    On Drive 2 I ran ddrescue, successfully recovering 99.9 percent of the drive.  The resulting image is called disk2.img.

     

    Now I want to mount this image and copy the data off it.  How can I do this?  I have been googling for a couple of hours, but I am not getting anywhere fast.  I read briefly that the UD plugin can mount these images, but I have not found out how yet.

     

    Can someone help me with a solution?  I currently have the image on another Unraid server; it is sitting in /mnt/disk1/recovery.

     

    I have attempted to use this command:

    mount -o loop /mnt/disk1/recovery/disk2.img /mnt/disks/drive2

    which resulted in this error:
     

    mount: /mnt/disks/drive2: wrong fs type, bad option, bad superblock on /dev/loop4, missing codepage or helper program, or other error.

     

    I have also run this command:

     

    parted disk2.img print

     

    Which resulted in this message:

     

    Model:  (file)
    Disk /mnt/disk1/recovery/disk2.img: 3001GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags: 
    
    Number  Start   End     Size    File system  Name  Flags
     1      32.8kB  3001GB  3001GB  reiserfs

     

    I have also run this command:

    blkid disk2.img

     

    Which resulted in this message:

    disk2.img: PTUUID="d5a8d2fd-b918-4458-b8ff-bb7f65e0b171" PTTYPE="gpt"


    I have also tried this command:

     

    mount -t reiserfs -o loop /mnt/disk1/recovery/disk2.img /mnt/disks/drive2

     

    Which resulted in this error:
     

    mount: /mnt/disks/drive2: wrong fs type, bad option, bad superblock on /dev/loop4, missing codepage or helper program, or other error.

     

    I have looked in dmesg for any clues and found this message:

     

    [ 2198.619991] REISERFS warning (device loop4): sh-2021 reiserfs_fill_super: can not find reiserfs on loop4
    [ 8824.536014] REISERFS warning (device loop4): sh-2021 reiserfs_fill_super: can not find reiserfs on loop4

     

    At this point I am not sure if the file system is corrupt or if I am just entering the commands incorrectly.

     

    Any help is most appreciated.

  5. I am updating this post to show the full process of recovering a drive.

    I used my friend's Unraid server.  I installed NerdPack GUI from Community Apps, then I installed ddrescue.

     

    I shut down the server and installed the defective Drive 2 for recovery.

     

    Once the system was rebooted, I started an Unraid terminal session and executed this command:

     

    root@Risa:/mnt/disk1# ddrescue -d /dev/sdf /mnt/disk1/recovery/disk2.img /mnt/disk1/recovery/disk2.log

     

    This was my first pass, and it resulted in this:

     

    GNU ddrescue 1.23
    Press Ctrl-C to interrupt
         ipos:    2931 GB, non-trimmed:        0 B,  current rate:       0 B/s
         opos:    2931 GB, non-scraped:        0 B,  average rate:    100 MB/s
    non-tried:        0 B,  bad-sector:    32768 B,    error rate:     256 B/s
      rescued:    3000 GB,   bad areas:        4,        run time:  8h 16m 52s
    pct rescued:   99.99%, read errors:       67,  remaining time:          0s
                                  time since last successful read:      1m  4s
    Finished                                      

     

    Next I ran this command:

     

    root@Risa:/mnt/disk1# ddrescue -d -r 3 /dev/sdf /mnt/disk1/recovery/disk2.img /mnt/disk1/recovery/disk2.log

     

    You will notice the -r 3 option; this tells ddrescue to retry all bad areas 3 times.  Notice also that I am using the same log (mapfile) from the first run.  This log tells ddrescue where the bad sectors are so it retries only those.

     

    GNU ddrescue 1.23
    Press Ctrl-C to interrupt
    Initial status (read from mapfile)
    rescued: 3000 GB, tried: 32768 B, bad-sector: 32768 B, bad areas: 4
    
    Current status
         ipos:    2931 GB, non-trimmed:        0 B,  current rate:       0 B/s
         opos:    2931 GB, non-scraped:        0 B,  average rate:       7 B/s
    non-tried:        0 B,  bad-sector:    28672 B,    error rate:     170 B/s
      rescued:    3000 GB,   bad areas:        5,        run time:      8m 41s
    pct rescued:   99.99%, read errors:      171,  remaining time:         n/a
                                  time since last successful read:      3m 56s
    Finished                                   

     

    If you look at the second run, you will notice it only spent 8 minutes retrying bad sectors.  Since this recovered almost nothing further, I will leave it as it is, mount the resulting image "disk2.img", and copy the data from the image to the new drive on my server once it's in the array.
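
    As a side note, the mapfile can also be inspected on its own with ddrescuelog, which ships with ddrescue (exact output format varies by version):

    # print a summary of the mapfile: rescued size, bad-sector size, bad areas
    ddrescuelog -t /mnt/disk1/recovery/disk2.log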

     

    Thanks for all your help.

     

     

  6. Well, this is what the other side looks like 🙂  Everything is going smoothly so far.  When I executed the New Config, I used the option to keep all drive assignments.  This helped greatly in putting all the drives back in their original slots, except for Drive 2.  The Drive 2 slot is empty, just like I want, and when the new 10T drive comes in I will place it there.

     

    My thought for the failing Drive 2 is to use ddrescue to create a rescue image of the drive.  Then I can open the image and copy the files out of it, as I don't have another 3T drive I can use for this process.  I do have a friend's Unraid server, which I am building for her, that I could use for the recovery, placing the image on her array.

     

    Thoughts on this process?

  7. Great.  I have backed up my flash drive, and I have created a text file with

    ls * -Ralh > /boot/disk2.txt

     

    which created disk2.txt, which I also have on my local machine.  This is a listing of all the files on the drive before I pull it.  I wish I had CRCs of each file, but I don't, so I am not sure which files will turn out corrupted.  Wish me luck and see you on the other side.
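
    For next time, recording checksums up front is a one-liner (md5sum as an example, run from the disk's mount point; paths are examples):

    # hash every file on the disk so corrupted files can be identified later
    cd /mnt/disk2 && find . -type f -exec md5sum {} + > /boot/disk2.md5
    # after recovery, list anything that no longer matches:
    # md5sum -c /boot/disk2.md5 | grep -v ': OK$'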

     

  8. Thanks for all your responses.  So the plan of action would be ...

    • Pull the failing Disk 2
    • Pull the bad parity drive
    • Do a New Config
    • Build an array with the new 10T drive and the remaining drives
    • Recover as much data as possible from Drive 2
    • Add a new 10T drive (yes, the minimum drive size will now be 10T) to take the place of the old 3T drive
    • Copy the recovered data back to the array

    Would there be any other suggestions?

     

    If I do a New Config, what would happen to my VMs, Dockers, and plugins, and how everything is set up?

     

     

  9. @trurl

    Quote

    The method begins by syncing parity, and doesn't continue until after parity sync has completed.

     

    Then the cloned disk is rebuilt from parity so it has the correct partition.

    Now this does not make any sense to me, so let me ask you a question: in Unraid, when you have a parity-protected array, the parity must be the largest drive in the array.  Correct?

    Going on that assumption, I cannot use a 3T disk cloned to a 10T drive, because my parity drive is only 3T (my apologies, you cannot see this in the original posting of my diagnostics because the drive was removed from the system).  Its serial number is W1F12655.

     

    My pre-clear has finished.  My 10T drive did 3 pre-clear cycles with no errors.

    I have re-installed the failing 3T parity drive in exactly the same slot it came out of and re-run the diagnostics.  Please advise.

     

    Thank you for the time you have spent with me.  It is appreciated.

    davyjones-diagnostics-20220305-2322.zip

  10. @JorgeB

     

    Quote

    Unraid won't accept that since it requires the partition to use the full device capacity, but you can always copy data from the clone to other drives after mounting it with, for example, the UD plugin.

    So this is the chicken-and-egg scenario.  If I clone the drive, I will lose the whole array, because at that point two drives will have "failed".  Stick with me here.  The reason I say this: if I clone this drive to a larger 10T drive, the array will not accept the drive because the partition is not the full capacity of the drive, and thus the array is down.

     

    It looks to me like the only solution is to rebuild the parity just as it is and then replace the failing drive.  Would this be correct?  Or would I have to expand the partition on the new drive to occupy the entire disk, and THEN Unraid will be sort of happy?  As I have not completely read your article on ddrescue, I might have missed a step.

     

    I don't have space on the remaining good drives; they are all full.

  11. So I believe I will go the ddrescue route.  Cloning the smaller drive to the bigger one is not an issue, but after the clone is done, will the drive show up as a 3T drive or a 10T drive in the array?

     

    I will have to locate another drive to put in temporarily, because I only have one 10T drive and that has to be the parity drive.

  12. @trurl

    Quote

    Since you seemed unaware of this, I assume you don't have Notifications setup to alert you immediately by email or other agent as soon as a problem is detected.

    Yes, guilty as charged.  I have since corrected this issue, but it's like closing the door after the horses have already left the barn.

     

    Quote

    Or you have been ignoring the notifications. You must take care of these problems as soon as they appear. Don't let an ignored problem become multiple problems and data loss.

    Yes, this is also the case.  Until recently it was a money issue; I have only now started to have a little discretionary income, which is why it went so long without being addressed.  It was only after the parity drive was disabled that I pushed for the replacement drive that is now testing.  My wife has promised me a second 10T drive next payday.  She knows how important this system is to the family.

     

    Quote

    Also note that it's possible the only thing wrong with your original parity disk was a bad connection, but since it wasn't in diagnostics can't tell. If it is OK and not out-of-sync (nothing written to any array disks while it was disabled or missing) you could rebuild disk2 from that.

    No, this was not a connection issue; I am certain of that.  This system has been rock solid for 7 years, which means the 3T drives are the original drives in this system.  It has been in continuous use for 7 years without problems.  I believe they are 3T green drives, if I remember correctly.  The cables I use all have locking tabs on them, and I am using 5-in-3 hot-swap bays; I have 3 of these bays in a mid-tower case.  This was back when they made mid-tower cases with the ability to put that many drives in one case.

  13. This is an old topic, and I should not do this, but ...

     

    Speaking from experience on the subject of two parity disks: it is good practice to have as much redundancy as you can afford in a system.  I currently have an Unraid server that is sick right now, with one disk throwing read errors and a failed parity drive.  Having a second parity drive might have spared me the headache of having to use ddrescue to recover the failing drive.  As it is, the media server, which does way more than just serve media, is down while I recover that drive.  That means the wife is mad at me because the internet and WiFi are down, and that is because I put my firewall on Unraid.  Maybe not a good decision, but it's done now and I have to deal with the wife's disapproval.

     

    I really believe it comes down to how valuable the Unraid server is.  If it holds throwaway data, then maybe 1 parity drive is sufficient.  If it does more, like my system, dual parity drives are just added insurance.

  14. Thank you very much for your quick reply.

     

    So what you are saying is ... before I install the new parity drive, I need to address the issue with the read errors first?

     

    Your article on using ddrescue was excellent, by the way.

     

    Right now as it stands, the system is up and running without parity and with Drive 2 failing with read errors.

  15. Good morning everyone.  

     

    I am looking for advice.  I have been using Unraid for the past 7 years, and I love the product.

     

    Last week I noticed that Unraid had disabled my parity drive.  I purchased a new 10TB IronWolf drive, which arrived the next day.  It's currently in the pre-clear stage before I install it.

     

    Last night a parity check ran, which was unexpected, as the former parity drive is actually not in the system.  I have the server run parity checks every month on the 1st, and this has worked out perfectly for me.

     

    I have everything on my media server: my internet router (a pfSense VM) as well as Plex, my media back end.  So far I am not experiencing any problems, but I do have a concern.

     

    After the parity check ran last night, Unraid reported:

     

    Last check completed on Tue 01 Mar 2022 08:48:10 AM PST (yesterday)
    Finding 384 errors  Duration: 8 hours, 48 minutes, 9 seconds. Average speed: 94.7 MB/sec
    
    Next check scheduled on Fri 01 Apr 2022 12:00:00 AM PDT
     Due in: 29 days, 16 hours, 56 minutes

     

    So ... for the size of the array, I felt that was not too bad.  But before I commission the new parity drive into service, I want to make sure that the system is OK and the parity will not be incorrect when it's recalculated.

    Please find enclosed my diagnostics file.

     

    Thank you for your help.

     

    Sincerely,

     

    SidebandSamurai

    davyjones-diagnostics-20220302-0556.zip

  16. I found this item on Amazon: https://www.amazon.com/gp/product/B08MBCX8RW/ref=ox_sc_act_title_1?smid=A1BKCYQGK0QCS0&psc=1 (WD Easystore 14TB External USB 3.0 Hard Drive - Black, WDBAMA0140HBK-NESN) for $269.00 on US Amazon.  Newegg has it priced at $337.95, and Best Buy has them for $259.99.  Since I am going to replace my parity, I will buy only one of these and upgrade the rest of my drives with 8TB drives for now.

  17. 5 hours ago, x88dually said:

    You cannot simply change the size of the Docker image in the webUI back to 25G once it's 75G; the size of the file does not shrink to the smaller value when you press the "Save" button.  So in the web interface you see 25G, but in reality the file is still 75G in size.
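
    If you do want the file back down to 25G, the way I understand it you have to recreate the image rather than shrink it.  A rough sketch (the path is the default; check your Docker settings page):

    # stop the Docker service in Settings -> Docker first, then delete the
    # oversized image; Unraid recreates it at the configured size on restart
    rm /mnt/user/system/docker/docker.img
    # containers can then be reinstalled via Apps -> Previous Apps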

     

    SpaceInvaderOne has an excellent YouTube video on Docker principles and setup.  Take a look at this video and let me know how you get along.

     

    I see that you are pretty new to Unraid; welcome.  You might want to subscribe to SpaceInvaderOne's channel; he has excellent information about Unraid and provides succinct instructions and good practices for anything you might do with it.

     

    Good Luck!

     

  18. I am really sorry you are having this issue.  It can be pretty scary when your system starts to have problems.

    I am not attacking you, and I would like to help.  Just to set the context of my post: I have been on 6.8.3 ever since it was released (about 5 months), and my system has been rock solid.  Maybe you should change the name of this post to "Unraid not functional after upgrade to 6.8.3" or something like that.

     

    I agree with bonienl, who is helping you through this, that it is not software; and if it is not software, it is hardware.

     

    I looked through the thread and could not find your diagnostics.  You might want to post diagnostics so that we can see how your system is booting.  Here are my thoughts: slowness from Unraid might indicate a bad or filled-up flash drive.  I experienced something similar a few days ago: while I was doing a pre-clear, the whole Unraid interface was corrupted with errors reporting it was unable to write to files, and I ended up stopping and starting the web interface engine to clear it up.  I don't think that exact cause is your issue, but if your flash drive is having problems, it could produce the slowness and corruption you are seeing.

     

    Have you tried to boot the flash drive in another system?  Yes, it will not fully start because it's missing the drives, but I was thinking that either the flash drive is dying or the USB port is going out.  Unraid runs from a RAM disk, so the other possibility is that you have bad RAM: the RAM disk gets created in a bad area of RAM and becomes corrupted.  If you boot this USB stick in another system and see an improvement, you can eliminate the flash drive as the problem and think about the USB port or bad RAM.  If you are still having the problem, you could back up your config folder, reformat the drive, put 6.8.2 back on the USB drive, copy your config folder over, and try that.  It is possible that the upgrade went sideways and became corrupted.
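
    If it helps, the backup itself is just a straight copy of the config folder (example paths; on another machine the stick shows up as a normal USB drive):

    # with the Unraid stick mounted at /mnt/usb (example path), save the config
    cp -r /mnt/usb/config ~/unraid-config-backup
    # after reformatting the stick FAT32 and extracting the 6.8.2 zip onto it,
    # run the bundled make_bootable script, then copy the config folder back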

     

    DO NOT use another flash drive; the license will no longer be valid on a new flash drive.  Reformat the old flash drive and copy the config over to that freshly formatted drive.

  19. I attempted to pre-clear 3 1.5TB drives I picked up from a friend who was not using them any more.  Two hours after I started the pre-clear (using the plugin from Community Apps), the output slowed to 90 MB/s and the Unraid web interface became corrupted; I had to restart those services to get control of my server back.  I noticed in the log that the array is now out of space: lots of errors where it attempted to write data but 0 bytes were written.  I did not have much free space to begin with, but I had not realized the plugin would use what remaining space is on my array to make all the fancy graphics work correctly.

     

    How can I fix this?  Can I redirect, at least temporarily, where these files are stored to the cache drive so I can get these drives tested and into my array?

     

    Should I use binhex's pre-clear Docker container instead?  I liked the plugin because it integrated so well with the Unraid GUI.

     

    Sincerely,

     

    Sideband Samurai