SidebandSamurai

Members

  • Posts: 269
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


SidebandSamurai's Achievements

Contributor (5/14)

Reputation: 1

Community Answers

  1. A long, long time ago, in a galaxy far, far away, there was a Plex server that was running and running and running. Everything was good with the universe until the dreaded exclamation marks appeared on my movie, TV, and music folders. Then things became dark and foreboding. So, as you can see from my introduction, I have had a Plex media server installed on Unraid for roughly 6 years, and it has been working great. One night I started to see exclamation marks next to all my media libraries, and I could not see any media at all. While troubleshooting the exclamation-mark issue, I deleted my Plex server from my authorized devices as a troubleshooting step. That was a bad thing: I ended up losing complete control of my server. I found an article on the Plex site where you strip certain sections out of the Preferences.xml file to regain control of the server (a sketch of that edit is below). Now I can see the server, but when I try to reclaim it, the reclaim does not work. The one thing I would prefer not to do is remove the docker container and rebuild it from scratch, as I would lose all the markers for what I have watched. It would not be a huge loss, but it would be nice to keep that information. Can anyone help me?
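     For reference, this is roughly the edit the Plex article had me make. The appdata path is from my own Unraid docker setup, and the attribute names are as I remember them from the article, so double-check both against it before running anything:

        # Stop the Plex container before editing:  docker stop plex
        PREFS="/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Preferences.xml"
        cp "$PREFS" "$PREFS.bak"    # keep a backup of the original first
        # Strip the online-account attributes so the server can be claimed again
        sed -i -E 's/ (PlexOnlineToken|PlexOnlineHome|PlexOnlineMail|PlexOnlineUsername)="[^"]*"//g' "$PREFS"
        # Restart and claim the server from the same LAN:  docker start plex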
  2. @Necrotic The drive I want to replace is an old 3T drive that is starting to experience SMART errors. The unbalance plugin sounds like a real solution that I can use; it will make this process much easier. Thank you for the suggestion.
  3. Good afternoon, everyone. I have one final drive that I need to convert from ReiserFS to XFS. My second-to-last drive failed along with my parity, so I ended up being forced to convert that drive to XFS. Frankly, I don't care whether the parity is valid or not; I just want this file system off my server. My thought was: pull the 3T drive that has ReiserFS on it, then execute a new config, leaving that slot open for a new 10T drive I have coming. In this state the parity will rebuild from the remaining 6 drives. Once the new 10T drive arrives, I will pre-clear it to make sure it is healthy, then add it to the array in the same slot the old 3T drive was in and restart the array, which will again cause a parity rebuild. Next I will use rsync to copy all the files from the old 3T ReiserFS drive to the new 10T XFS drive (a sketch of the copy is below). I am fortunate, as I only had two ReiserFS drives left in my system; all the rest have been XFS ever since it was available. Any thoughts on this process? I have taken a look at the documentation (not all of it, mind you; that is a lot of reading), but since I have already been through this process, I can do it one more time.
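     For the copy step, something along these lines is what I have in mind. This is a minimal sketch; the mount points (the old drive mounted read-only via Unassigned Devices at /mnt/disks/old3t, the new drive as /mnt/disk2) are assumptions, so adjust them to your layout:

        # Dry run first to see what would be copied
        rsync -avh --dry-run /mnt/disks/old3t/ /mnt/disk2/
        # Real copy: -a preserves permissions and times, -P shows progress and allows resume
        rsync -avhP /mnt/disks/old3t/ /mnt/disk2/
        # Compare file counts afterwards as a sanity check
        find /mnt/disks/old3t -type f | wc -l
        find /mnt/disk2 -type f | wc -l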
  4. Well, I get to answer my very own question. I went to dinner and came back to address this problem further. I ran across an article where an NTFS disk was recovered with ddrescue but the person could not mount it, and then I found the article Guide to Using DDRescue to Recover Data. I read it from beginning to end looking for a clue as to what I might be doing wrong. There is a section called "Working with image files containing multiple partitions", and after reading it I decided to follow its steps to see what I would find out.

     I moved into the recovery folder on disk1:

        cd /mnt/disk1/recovery/

     Next I executed:

        parted disk2.img

     The server responded with:

        GNU Parted 3.3
        Using /mnt/disk1/recovery/disk2.img
        Welcome to GNU Parted! Type 'help' to view a list of commands.
        (parted)

     I entered "unit", parted asked "Unit? [compact]?", and I answered "B" (bytes). Then I ran "print", and parted responded with:

        Model: (file)
        Disk /mnt/disk1/recovery/disk2.img: 3000592982016B
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt
        Disk Flags:

        Number  Start   End             Size            File system  Name  Flags
         1      32768B  3000592965119B  3000592932352B  reiserfs

     At this point I only saw one partition, but having read the article I remembered that the partition has an offset: see, the start is at 32768B, yet my mount command did not supply an offset. Just a little further down the article was the command I hoped I was looking for:

        mount -o loop,ro,offset=32768 /mnt/disk1/recovery/disk2.img /mnt/disks/drive2

     Note that this mounts the image read-only (ro) and supplies the start offset of 32768. After an agonizing minute, as disk1 spun up and the image started mounting, I was greeted with the following in dmesg:

        [32209.362880] REISERFS (device loop4): found reiserfs format "3.6" with standard journal
        [32209.362890] REISERFS (device loop4): using ordered data mode
        [32209.362890] reiserfs: using flush barriers
        [32209.363866] REISERFS (device loop4): journal params: device loop4, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
        [32209.363998] REISERFS (device loop4): checking transaction log (loop4)
        [32209.397773] REISERFS (device loop4): Using r5 hash to sort names

     I thought to myself, did it mount? I went to /mnt/disks/drive2 and, voila, I found my files. Yeah baby, I am back in business!

     A couple of things made this attempt successful. Setting parted to byte mode (not kilobytes) gave me the precise location of the offset, and mounting read-only (ro) protects the image from being written to, but the key was offset=32768: with it, the mount command was able to locate the correct area to read, and the mount completed successfully.

     Thanks to @SpaceInvaderOne: I was following his video on how to set up Krusader on my friend's server, and that video saved me a whole lot of headache. This server is being set up for a friend, and I am doing a lot of things I did not do on my own server. One of those was setting up shortcuts in Krusader, which made it easy as pie to copy my files from my friend's server over to mine. After the recovery is finished, it is time to renovate my server and make it better.
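     A footnote for anyone who finds this later: if I understand the loop driver correctly, you can get the same result without computing the offset by hand by letting losetup scan the partition table. This is a sketch of that alternative, not what I actually ran; the loop device name is whatever losetup hands back:

        # -f: first free loop device, -P: scan the partition table, -r: read-only
        LOOPDEV=$(losetup -f -P -r --show /mnt/disk1/recovery/disk2.img)
        # The partition then appears as ${LOOPDEV}p1; mount it read-only
        mkdir -p /mnt/disks/drive2
        mount -o ro "${LOOPDEV}p1" /mnt/disks/drive2
        # When finished:  umount /mnt/disks/drive2 && losetup -d "$LOOPDEV"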
  5. I have a question I am struggling with. I had a 3T drive experience errors, and at the same time I had a failed parity drive. I removed both drives and replaced the parity drive with a 10T drive, then did a new config so that my array could rebuild parity from the existing data. That left me with the bad parity drive and Drive 2. On Drive 2 I ran ddrescue, successfully recovering 99.9 percent of the drive into an image called disk2.img. Now I want to mount this image and copy the data off it. How can I do this? I have been googling for a couple of hours, but I am not getting anywhere fast. I read briefly that the UD plugin can mount these images, but I have not found out how yet. Can someone help me with a solution? I currently have the image on another Unraid server, sitting in /mnt/disk1/recovery.

     I have attempted this command:

        mount -o loop /mnt/disk1/recovery/disk2.img /mnt/disks/drive2

     which resulted in this error:

        mount: /mnt/disks/drive2: wrong fs type, bad option, bad superblock on /dev/loop4, missing codepage or helper program, or other error.

     I have also run:

        parted disk2.img print

     which printed:

        Model: (file)
        Disk /mnt/disk1/recovery/disk2.img: 3001GB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt
        Disk Flags:

        Number  Start   End     Size    File system  Name  Flags
         1      32.8kB  3001GB  3001GB  reiserfs

     I have also run:

        blkid disk2.img

     which printed:

        disk2.img: PTUUID="d5a8d2fd-b918-4458-b8ff-bb7f65e0b171" PTTYPE="gpt"

     And I have tried:

        mount -t reiserfs -o loop /mnt/disk1/recovery/disk2.img /mnt/disks/drive2

     which resulted in the same error:

        mount: /mnt/disks/drive2: wrong fs type, bad option, bad superblock on /dev/loop4, missing codepage or helper program, or other error.

     I have looked in dmesg for clues and found this:

        [ 2198.619991] REISERFS warning (device loop4): sh-2021 reiserfs_fill_super: can not find reiserfs on loop4
        [ 8824.536014] REISERFS warning (device loop4): sh-2021 reiserfs_fill_super: can not find reiserfs on loop4

     At this point I am not sure whether the file system is corrupt or I am just entering the commands incorrectly. Any help is most appreciated.
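     For anyone else debugging this: the parted output above already hints at the answer, since the image is a whole disk with a GPT, so the filesystem does not start at byte 0. A quick way to check whether an intact superblock sits at the partition start, assuming util-linux's low-level probing mode works on image files as I believe it does:

        # Probe for a filesystem superblock at the partition's start offset (32768 bytes)
        blkid -p -O 32768 disk2.img
        # If this prints TYPE="reiserfs", the superblock is intact and the image
        # just needs to be mounted with that offset (see answer 4 above).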
  6. I am updating this post to show the full process of recovering a drive. I used my friend's Unraid server: I installed NerdPack GUI from Community Apps, then installed ddrescue. I shut down the server and installed the defective Drive 2 for recovery. Once the system had rebooted, I started an Unraid terminal session and executed this command:

        root@Risa:/mnt/disk1# ddrescue -d /dev/sdf /mnt/disk1/recovery/disk2.img /mnt/disk1/recovery/disk2.log

     This was my first pass, and it resulted in this:

        GNU ddrescue 1.23
        Press Ctrl-C to interrupt
             ipos:  2931 GB,  non-trimmed:      0 B,  current rate:      0 B/s
             opos:  2931 GB,  non-scraped:      0 B,  average rate:  100 MB/s
        non-tried:      0 B,   bad-sector:  32768 B,    error rate:  256 B/s
          rescued:  3000 GB,    bad areas:        4,      run time:  8h 16m 52s
        pct rescued:  99.99%, read errors:       67,  remaining time:       0s
                                   time since last successful read:     1m 4s
        Finished

     Next I ran this command:

        root@Risa:/mnt/disk1# ddrescue -d -r 3 /dev/sdf /mnt/disk1/recovery/disk2.img /mnt/disk1/recovery/disk2.log

     You will notice the -r 3 option; this tells ddrescue to retry all bad areas 3 times. Notice also that I am using the same log (mapfile) from the first pass. This log tells ddrescue where the bad sectors are, so it retries only those:

        GNU ddrescue 1.23
        Press Ctrl-C to interrupt
        Initial status (read from mapfile)
          rescued:  3000 GB,  tried:  32768 B,  bad-sector:  32768 B,  bad areas: 4
        Current status
             ipos:  2931 GB,  non-trimmed:      0 B,  current rate:      0 B/s
             opos:  2931 GB,  non-scraped:      0 B,  average rate:    7 B/s
        non-tried:      0 B,   bad-sector:  28672 B,    error rate:  170 B/s
          rescued:  3000 GB,    bad areas:        5,      run time:    8m 41s
        pct rescued:  99.99%, read errors:      171,  remaining time:      n/a
                                   time since last successful read:    3m 56s
        Finished

     As you can see, the second pass spent only about 8 minutes retrying bad sectors. Since it recovered almost nothing further (bad sectors went from 32768 B down to 28672 B), I will leave the image as it is, mount the resulting disk2.img, and copy the data from it to the new drive on my server once that drive is in the array. Thanks for all your help.
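     One more trick I came across while reading about mapfiles: the ddrescue package ships a companion tool, ddrescuelog, for inspecting a mapfile without rerunning a pass. I have not used it on this mapfile myself, so treat the flag below as an assumption to check against the man page:

        # Summarize what the mapfile says was rescued vs. still bad
        # (-t / --show-status is the summary option, if I recall the man page correctly)
        ddrescuelog -t /mnt/disk1/recovery/disk2.log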
  7. @fabianonline As of 6.0.1, Agent 2 is available: https://www.zabbix.com/download_agents
  8. Well, this is what the other side looks like 🙂 Everything is going smoothly so far. When I executed the new config, I used the option to keep all drive assignments. This helped greatly in putting all the drives back in their original slots, except for Drive 2. The Drive 2 slot is empty, just like I want, and when the new 10T drive comes in, I will place it there. My thought for the failing Drive 2 is to use ddrescue to create a rescue image of the drive; then I can open the image and copy the files out of it, as I don't have another 3T drive I can use for this process. I do have a friend's Unraid server that I am building for her that I could use for the recovery, placing the image on its array. Thoughts on this process?
  9. Great. I have backed up my flash drive, and I have created a listing of all the files on the drive before I pull it:

        ls -Ralh * > /boot/disk2.txt

     This created disk2.txt, which I also have on my local machine. I wish I had CRCs of each file, but I don't, so I am not sure which files will be corrupted. Wish me luck, and see you on the other side.
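     For next time, here is the kind of checksum manifest I wished I had. A minimal sketch, assuming the drive is mounted at /mnt/disk2 and the recovered copy later appears at /mnt/disks/drive2 (both paths illustrative):

        # Build an md5 manifest of every file on the disk before pulling it
        cd /mnt/disk2
        find . -type f -print0 | xargs -0 md5sum > /boot/disk2.md5
        # Later, verify the recovered copies against the manifest and show only failures
        cd /mnt/disks/drive2
        md5sum -c /boot/disk2.md5 | grep -v ': OK$'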
  10. Thanks for all your responses. So the plan of action would be:

      • Pull the failing Disk 2.
      • Pull the bad parity drive.
      • Do a new config and build an array with the new 10T drive and the remaining drives.
      • Recover as much data from Drive 2 as possible.
      • Add a new 10T drive (yes, the minimum drive will now be 10T) to take the place of the old 3T drive.
      • Copy the recovered data back to the array.

      Would there be any other suggestions? If I do a new config, what would happen with my VMs, Dockers, plugins, and how everything is set up?
  11. @trurl Now this does not make any sense, so let me ask you a question: in Unraid, when you have a parity-protected array, the parity drive must be the largest drive in the array, correct? Going on that assumption, I cannot use the 3T disk cloned to a 10T drive, because my parity drive is only 3T (my apologies; you can see this in the original posting of my diagnostics, as the drive had since been removed from the system). Its serial number is W1F12655. My pre-clear has finished; the 10T drive did 3 pre-clear cycles with no errors. I have re-installed the failing 3T parity drive in exactly the same slot it came out of and re-run the diagnostics. Please advise, and thank you for the time you have spent with me. It is appreciated.

      davyjones-diagnostics-20220305-2322.zip
  12. @JorgeB The method you stated above only works if the original parity drive is still good, right? Right now I am waiting on my new 10T drive to complete a pre-clear, which will take a couple of days.
  13. No problem. My 10T drive is still in pre-clear; when it finishes, I will put the 3T parity drive back in the system. It will take a few days.
  14. @JorgeB So this is the chicken-and-egg scenario. If I clone the drive, I will lose the whole array, because at that point two drives will have "failed". Stick with me here. The reason I say this is that if I clone this drive to a larger 10T drive, the array will not accept the new drive, because its partition is not the full capacity of the drive; thus the array is down. This looks to me like the only solution is to rebuild the parity just as it is and then replace the failing drive. Would this be correct? Or would I have to expand the partition on the new drive to occupy the entire disk (a generic sketch is below), and THEN Unraid would be sort of happy? As I have not completely read your article on ddrescue, I might have missed a step. I don't have space on the remaining good drives; they are all full.
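      On the "expand the partition" idea, the generic Linux way to grow a cloned partition and its ReiserFS filesystem would look like the sketch below. I am not sure Unraid would accept the result (it is picky about partition layout), and /dev/sdX is a placeholder, so treat this as an illustration rather than a recipe:

        # Grow partition 1 to the end of the new, larger disk
        parted /dev/sdX resizepart 1 100%
        # Then grow the ReiserFS filesystem to fill the enlarged partition
        # (resize_reiserfs is part of reiserfsprogs; with no size argument it
        # expands to the partition size; run it on the unmounted device)
        resize_reiserfs /dev/sdX1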