SidebandSamurai

Members · 269 posts
Everything posted by SidebandSamurai

  1. A long, long time ago, in a galaxy far, far away, there was a Plex server that was running and running and running. Everything was good with the universe until the dreaded exclamation marks appeared on my movie, TV, and music folders. Then things became dark and foreboding. So, as you can see from my introduction, I have had a Plex media server installed on Unraid for roughly 6 years, and it has been working great. One night I started to see exclamation marks next to all my media libraries and could not see any media at all. While troubleshooting the exclamation mark issue, I deleted my Plex server from my authorized devices as a troubleshooting step. That was a bad thing: I ended up losing complete control of my server. I found an article on the Plex site where you can strip certain entries out of the Preferences.xml file to regain control of the server. Now I can see the server, but when I try to reclaim it, the reclaim does not go through. The one thing I would prefer not to do is remove the Docker container and rebuild it from scratch, as I will lose all the markers for what I have watched. It would not be a huge loss, but it would be nice to keep that information. Can anyone help me?
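     For reference, the entries that article has you strip from Preferences.xml are the account-binding attributes. Treat this as a sketch rather than the definitive fix (the appdata path and the exact attribute names are my assumptions about the usual setup), and back the file up with the container stopped before touching it:

          # Path depends on your container; this is a common appdata layout, used here as an example
          PREFS="/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Preferences.xml"
          cp "$PREFS" "$PREFS.bak"
          # Strip the account-binding attributes (names as I understand them from the usual guides)
          sed -i -e 's/ PlexOnlineToken="[^"]*"//' \
                 -e 's/ PlexOnlineHome="[^"]*"//' \
                 -e 's/ PlexOnlineMail="[^"]*"//' \
                 -e 's/ PlexOnlineUsername="[^"]*"//' "$PREFS"
          # Start the container again and re-claim the server from https://app.plex.tv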
  2. @Necrotic The drive I want to replace is an old 3T drive that I want to remove, as it is starting to experience SMART errors. The unBALANCE plugin sounds like a real solution that I can use; it will make this process much easier. Thank you for the suggestion.
  3. Good afternoon everyone. I have one final drive that I need to convert from ReiserFS to XFS. My second-to-last drive failed along with my parity, so I ended up being forced to convert that drive to XFS. Frankly, I don't care whether the parity is valid or not; I just want this file system off my server. My thought was: pull the 3T drive that has ReiserFS on it, then execute a new config, leaving that slot open for a new 10T drive I have coming. In this state the parity will rebuild with the remaining 6 drives. Once the new 10T drive arrives, I will pre-clear it to make sure it is healthy, then add it to the array in the same slot the old 3T drive was in and restart the array, which will again cause a parity rebuild. Next I will use rsync to copy all the files from the old 3T ReiserFS drive to the new 10T XFS drive. I am fortunate, as I only had two ReiserFS drives left in my system; all the rest have been XFS ever since it became available. Any thoughts on this process? I have taken a look at the documentation (not all of it, mind you, that is a lot of reading), but since I have already been through this process once, I figure I can do it one more time.
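     For the copy step, the rsync invocation I have in mind is the usual disk-to-disk one; the paths here are examples only (the old drive would be mounted with Unassigned Devices once it is out of the array, and the new 10T drive would be whichever array slot it lands in):

          # Copy everything from the old ReiserFS disk onto the new XFS disk in the array
          rsync -avh --progress /mnt/disks/old3T/ /mnt/disk5/
          # Run the same command a second time afterwards; a clean second pass means nothing was missed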
  4. Well, I get to answer my very own question. I went to dinner and came back to address this problem further. I ran across an article where an NTFS disk was recovered with ddrescue but the person could not mount it. I then ran across this article: Guide to Using DDRescue to Recover Data. I read it from beginning to end looking for a clue as to what I might be doing wrong. There is a section called "Working with image files containing multiple partitions", and after reading it I decided to follow its steps to see what I would find out.

     I moved into the recovery folder on disk1:

          cd /mnt/disk1/recovery/

     Next I executed:

          parted disk2.img

     The server responded with:

          GNU Parted 3.3
          Using /mnt/disk1/recovery/disk2.img
          Welcome to GNU Parted! Type 'help' to view a list of commands.
          (parted)

     The next command in parted was unit; parted asked Unit? [compact]?, and I answered B (bytes). Then I ran print, and parted responded with this information:

          Model:  (file)
          Disk /mnt/disk1/recovery/disk2.img: 3000592982016B
          Sector size (logical/physical): 512B/512B
          Partition Table: gpt
          Disk Flags:
          Number  Start   End             Size            File system  Name  Flags
           1      32768B  3000592965119B  3000592932352B  reiserfs

     At this point I only see one partition, but having read the article I remembered that the partition has an offset: the start is at 32768 bytes, and my earlier mount commands did not have an offset. Just a little further down the article was the command I hoped I was looking for:

          mount -o loop,ro,offset=32768 /mnt/disk1/recovery/disk2.img /mnt/disks/drive2

     Note that with this mount command I am mounting the image read-only (ro) and supplying the start offset of 32768. After an agonizing minute, as disk1 spun up and the image started mounting, I was greeted with the following in dmesg:

          [32209.362880] REISERFS (device loop4): found reiserfs format "3.6" with standard journal
          [32209.362890] REISERFS (device loop4): using ordered data mode
          [32209.362890] reiserfs: using flush barriers
          [32209.363866] REISERFS (device loop4): journal params: device loop4, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
          [32209.363998] REISERFS (device loop4): checking transaction log (loop4)
          [32209.397773] REISERFS (device loop4): Using r5 hash to sort names

     I thought to myself, did it mount? I went to /mnt/disks/drive2 and, voila, I found my files. Yeah baby, I am back in business! A couple of things made this attempt successful. Setting parted to byte mode (not kilobytes) gave me the precise location of the offset, and mounting the image read-only (ro) protects it from being written to. But the key to making this work was offset=32768: with it, the mount command was able to locate the start of the file system and the mount completed successfully.

     Thanks to @SpaceInvaderOne. I was following his video on how to set up Krusader on my friend's server, and that video saved me a whole lot of headache. This server is being set up for a friend, and I am doing a lot of things I did not do on my own server. One of those was to set up shortcuts in Krusader; those shortcuts made it easy as pie to copy my files from my friend's server over to mine. After the recovery is finished, it is time to renovate my server and make it better.
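     A side note for anyone else doing this kind of recovery: instead of working out the byte offset by hand, you can have the kernel scan the image's partition table with losetup. I did not use this route above, so treat it as a sketch; the loop device number is just an example.

          # Attach the image read-only (-r) and let the kernel create partition devices (-P)
          losetup -f --show -P -r /mnt/disk1/recovery/disk2.img
          # It prints the loop device it picked, e.g. /dev/loop4; the partition appears as /dev/loop4p1
          mount -o ro /dev/loop4p1 /mnt/disks/drive2
          # When finished, unmount and detach
          umount /mnt/disks/drive2
          losetup -d /dev/loop4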
  5. I have a question I am struggling with. I had a 3T drive experience errors, and at the same time I had a failed parity drive. I removed both drives and replaced the parity drive with a 10T drive. I did a new config so that the array could rebuild parity from the existing data. That left me with the bad parity drive and Drive 2. For Drive 2, I ran ddrescue and successfully recovered 99.9 percent of the drive into an image called disk2.img. Now I want to mount this image and copy the data off it. How can I do this? I have been googling for a couple of hours but I am not getting anywhere fast. I read briefly that the UD plugin can mount these images, but I have not found out how yet. Can someone help me with a solution? I currently have the image on another Unraid server; it is sitting in /mnt/disk1/recovery. I have attempted to use this command:

          mount -o loop /mnt/disk1/recovery/disk2.img /mnt/disks/drive2

     which resulted in this error:

          mount: /mnt/disks/drive2: wrong fs type, bad option, bad superblock on /dev/loop4, missing codepage or helper program, or other error.

     I have also run this command:

          parted disk2.img print

     which resulted in this message:

          Model:  (file)
          Disk /mnt/disk1/recovery/disk2.img: 3001GB
          Sector size (logical/physical): 512B/512B
          Partition Table: gpt
          Disk Flags:
          Number  Start   End     Size    File system  Name  Flags
           1      32.8kB  3001GB  3001GB  reiserfs

     I have also run:

          blkid disk2.img

     which resulted in this message:

          disk2.img: PTUUID="d5a8d2fd-b918-4458-b8ff-bb7f65e0b171" PTTYPE="gpt"

     I have also tried this command:

          mount -t reiserfs -o loop /mnt/disk1/recovery/disk2.img /mnt/disks/drive2

     which resulted in this error:

          mount: /mnt/disks/drive2: wrong fs type, bad option, bad superblock on /dev/loop4, missing codepage or helper program, or other error.

     I have looked in dmesg for any clues and found this:

          [ 2198.619991] REISERFS warning (device loop4): sh-2021 reiserfs_fill_super: can not find reiserfs on loop4
          [ 8824.536014] REISERFS warning (device loop4): sh-2021 reiserfs_fill_super: can not find reiserfs on loop4

     At this point I am not sure if the file system is corrupt or I am just entering the commands incorrectly. Any help is most appreciated.
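     (The resolution is in the post above, but for anyone skimming: the plain loop mounts fail because the image carries a GPT partition table, so the ReiserFS file system does not start at byte 0. A quick way to find the right offset, sketched here with the numbers from this image, is to read the partition's start sector and multiply by the sector size.)

          # List the partitions inside the image file to get the start sector
          fdisk -l /mnt/disk1/recovery/disk2.img
          # Here partition 1 starts at sector 64 with 512-byte sectors:
          #   offset = 64 * 512 = 32768 bytes
          mount -o loop,ro,offset=32768 /mnt/disk1/recovery/disk2.img /mnt/disks/drive2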
  6. I am updating this post to show the full process of recovering a drive. I used my friend's Unraid server. I installed NerdPack GUI from Community Apps, then I installed ddrescue. I shut down the server and installed the defective Drive 2 for recovery. Once the system was rebooted, I started an Unraid terminal session and executed this command:

          root@Risa:/mnt/disk1# ddrescue -d /dev/sdf /mnt/disk1/recovery/disk2.img /mnt/disk1/recovery/disk2.log

     This was my first pass, and it resulted in this:

          GNU ddrescue 1.23
          Press Ctrl-C to interrupt
               ipos:    2931 GB, non-trimmed:        0 B,  current rate:       0 B/s
               opos:    2931 GB, non-scraped:        0 B,  average rate:     100 MB/s
          non-tried:        0 B,  bad-sector:    32768 B,    error rate:     256 B/s
            rescued:    3000 GB,   bad areas:        4,        run time:  8h 16m 52s
          pct rescued:   99.99%, read errors:       67,  remaining time:          0s
                                        time since last successful read:       1m 4s
          Finished

     Next I ran this command:

          root@Risa:/mnt/disk1# ddrescue -d -r 3 /dev/sdf /mnt/disk1/recovery/disk2.img /mnt/disk1/recovery/disk2.log

     You will notice the -r 3 option; this tells ddrescue to retry all bad areas 3 times. Notice also that I am using the same log (map) file from the first pass. This log tells ddrescue where the bad sectors are so it can retry them.

          GNU ddrescue 1.23
          Press Ctrl-C to interrupt
          Initial status (read from mapfile)
            rescued:    3000 GB,       tried:    32768 B,  bad-sector:    32768 B,  bad areas:        4
          Current status
               ipos:    2931 GB, non-trimmed:        0 B,  current rate:       0 B/s
               opos:    2931 GB, non-scraped:        0 B,  average rate:       7 B/s
          non-tried:        0 B,  bad-sector:    28672 B,    error rate:     170 B/s
            rescued:    3000 GB,   bad areas:        5,        run time:      8m 41s
          pct rescued:   99.99%, read errors:      171,  remaining time:         n/a
          time since last successful read:      3m 56s
          Finished

     Notice that the second command only spent about 8 minutes retrying the bad sectors. Since it did not recover any additional data, I will leave things as they are, mount the resulting image (disk2.img), and copy the data from that image to the new drive on my server once it is in the array. Thanks for all your help.
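     One more note in case it helps anyone: the ddrescue package ships with a companion tool, ddrescuelog, for inspecting the map file, so you can see what is still marked bad before deciding whether more retry passes are worth it. A minimal sketch, assuming the NerdPack build includes it:

          # Summarise what the map file still lists as unrecovered (bad areas, bad-sector bytes)
          ddrescuelog -t /mnt/disk1/recovery/disk2.log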
  7. @fabianonline As of 6.0.1, Agent 2 is available: https://www.zabbix.com/download_agents
  8. Well, this is what the other side looks like 🙂 Everything is going smoothly so far. When I executed the new config, I used the option to keep all drive assignments, which helped greatly in putting all the drives back in their original slots, except for Drive 2. The Drive 2 slot is empty, just as I want, and when the new 10T drive comes in I will place it there. My thought for the failing Drive 2 is to use ddrescue to create a rescue image of the drive; then I can open the image and copy the files out of it, as I don't have another 3T drive I can use for this process. I do have a friend's Unraid server that I am building for her, which I could use for the recovery, placing the image on its array. Thoughts on this process?
  9. Great. I have backed up my flash drive, and I have created a text file with

          ls * -Ralh > /boot/disk2.txt

     which created a disk2.txt that I also have on my local machine. This is a listing of all the files on the drive before I pull it. I wish I had CRCs of each file, but I don't, so I am not sure which files will end up corrupted. Wish me luck, and see you on the other side.
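     For next time (and for anyone reading along), generating checksums before pulling a disk is a one-liner. A sketch, with the disk path matching the drive in question:

          # Record an MD5 for every file on the disk before pulling it
          find /mnt/disk2 -type f -exec md5sum {} + > /boot/disk2.md5
          # Later, verify against that list (paths must match, so run it with the data mounted in the same place)
          md5sum -c /boot/disk2.md5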
  10. Thanks for all your responses. So the plan of action would be:
      • Pull the failing Disk 2.
      • Pull the bad parity drive.
      • Do a new config and build an array with the new 10T drive and the remaining drives.
      • Recover as much data as possible from Drive 2.
      • Add a new 10T drive (yes, the minimum drive size will now be 10T) to take the place of the old 3T drive.
      • Copy the recovered data back to the array.
      Would there be any other suggestions? If I do a new config, what happens to my VMs, Dockers, plugins, and the way everything is set up?
  11. @trurl Now this does not make any sense, so let me ask you a question. In Unraid, when you have a parity-protected array, the parity must be the largest drive in the array, correct? Going on that assumption, I cannot use the 3T disk cloned onto a 10T drive, because (and my apologies, since you can only see this in the original posting of my diagnostics, as the drive has since been removed from the system) my parity drive is only 3T. Its serial number is W1F12655. My pre-clear has finished; my 10T drive did 3 pre-clear cycles with no errors. I have re-installed the failing 3T parity drive in exactly the same slot it came out of and re-run the diagnostics. Please advise. Thank you for the time you have spent with me; it is appreciated. davyjones-diagnostics-20220305-2322.zip
  12. @JorgeB The method you stated above only applies if the original parity drive is still good, right? Right now I am waiting on my new 10T drive to complete a pre-clear, which will take a couple of days.
  13. No problem. My 10T drive is still in pre-clear. When it finishes, I will put the 3T parity drive back in the system. It will take a few days.
  14. @JorgeB So this is a chicken-and-egg scenario. If I clone the drive, I will lose the whole array, because at that point two drives will have "failed". Stick with me here. The reason I say this is that if I clone this drive to a larger 10T drive, the array will not accept the drive, because the partition is not the full capacity of the drive; thus the array is now down. It looks to me like the only solution is to rebuild the parity just as it is and then replace the failing drive. Would this be correct? Or would I have to expand the partition on the new drive to occupy the entire disk, and THEN Unraid would be sort of happy? As I have not completely read your article on ddrescue, I might have missed a step. I don't have space on the remaining good drives; they are all full.
  15. So I believe I will go the ddrescue route. Cloning the smaller drive to the bigger one is not an issue, but after the clone is done, will the drive show up as a 3T drive or a 10T drive in the array? I will have to locate another drive to put in as a temporary, because I only have one 10T drive and that has to be the parity drive.
  16. @trurl Yes, guilty as charged. I have since corrected this issue, but it's like closing the door after the horses have already left the barn. Yes, this is also the case. Until recently it has been a money issue; I have now started to have a little discretionary income, which is why it went so long without being addressed. It was only after the parity drive was disabled that I pushed for the replacement drive that is now testing. My wife has promised me a second 10T drive next payday; she knows how important this system is to the family. No, this was not a connection issue, I am certain of that. This system has been rock solid for 7 years, which means the 3T drives are the original drives in this system. It has been in continuous use for 7 years without problems. I believe they are 3T green drives, if I remember correctly. The cables I use all have locking tabs on them, and I am using 5-in-3 hot-swap bays, three of them, in a mid-tower case. This was back when they made mid-tower cases with the ability to put that many drives in one case.
  17. This is an old topic, and I should not do this, but ... speaking from experience on the subject of two parity disks: it is good practice to have as much redundancy as you can afford in a system. I currently have an Unraid server that is sick right now, with one disk showing read errors and a failed parity drive. Having that second parity drive might have saved me the headache of having to use ddrescue to recover the failing drive. It means the media server, which does far more than just serve media, is down while I recover that drive. It also means the wife is mad at me, because the internet and WiFi are down, and that is because I put my firewall on Unraid. Maybe not a good decision, but it is done now and I have to deal with the wife's disapproval. I really believe it comes down to how valuable the Unraid server is. If it holds throwaway data, then maybe one parity drive is sufficient. If it does more, like my system, dual parity drives are just added insurance.
  18. Thank you very much for your quick reply. So what you are saying is that before I install the new parity drive, I need to address the issue with the read errors first? Your article on using DDRescue was excellent, by the way. Right now, as it stands, the system is up and running without parity and with Drive 2 failing with read errors.
  19. Good morning everyone. I am looking for advice. I have been using Unraid for the past 7 years and I love the product. Last week I noticed that Unraid had disabled my parity drive. I purchased a new 10TB IronWolf drive, which arrived the next day; it is currently in the pre-clear stage before I install it. Last night a parity check ran, which was unexpected, as the former parity drive is actually not in the system. I have the server run parity checks every month on the 1st, and this has worked out perfectly for me. I have everything on my media server: my internet router (a pfSense VM) as well as Plex, my media back end. So far I am not experiencing any problems, but I do have a concern. After the parity read ran last night, Unraid reported:

           Last check completed on Tue 01 Mar 2022 08:48:10 AM PST (yesterday)
           Finding 384 errors
           Duration: 8 hours, 48 minutes, 9 seconds. Average speed: 94.7 MB/sec
           Next check scheduled on Fri 01 Apr 2022 12:00:00 AM PDT
           Due in: 29 days, 16 hours, 56 minutes

      So, for the size of the array, I felt that was not too bad. But before I commission the new parity drive into service, I want to make sure the system is OK and that parity will not be incorrect when it is re-calculated. Please find enclosed my diagnostics file. Thank you for your help. Sincerely, SidebandSamurai davyjones-diagnostics-20220302-0556.zip
  20. I found this item on Amazon: https://www.amazon.com/gp/product/B08MBCX8RW/ref=ox_sc_act_title_1?smid=A1BKCYQGK0QCS0&psc=1 WD Easystore 14TB External USB 3.0 Hard Drive - Black WDBAMA0140HBK-NESN, for $269.00 on US Amazon. Newegg has it priced at $337.95, and Best Buy has them for $259.99. Since I am going to replace my parity, I will buy only one of these and upgrade the rest of my drives with 8TB drives for now.
  21. I had a real scare last night. Right now I am using internet off of my cell phone; internet on my main network is down. I noticed that my Plex container needed updating. Without thinking, I opted to update and answered yes to the "Are you sure?" question. The update process proceeded without checking whether a network connection could be established with the update server. It then proceeded to pull 0 bytes and went through the upgrade process anyway, including removing the Docker container as it does in its clean-up step. I was really scared that the working container might have been destroyed. Fortunately that did not happen; the only thing that did happen was that the Plex icon was deleted, and amazingly the container is still working. I would like to see the Docker update process check for a valid internet connection before proceeding with the update. A check should also be made that more than 0 bytes were pulled before proceeding with the upgrade routines; if 0 bytes were pulled, the upgrade should abort with a "no data pulled" error. This can be reproduced by letting the Docker subsystem check for updates and flag a container as ready for update, then, while the server is still connected to the switch, somehow blocking internet access to the Unraid server and executing the update with no access to the internet. I did not provide logs, but if you need them let me know.
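      To illustrate the kind of pre-flight check I mean, here is a rough sketch of the idea; it is not how Unraid's updater is actually written, and the registry URL assumes the image comes from Docker Hub:

           # Ask the registry for its API root; any HTTP response (even 401) means it is reachable
           status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 https://registry-1.docker.io/v2/)
           if [ "$status" = "000" ]; then
               echo "No connection to the update server - aborting update" >&2
               exit 1
           fi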
  22. You cannot simply change the size of the Docker image in the webUI back to 25G once it is 75G; the file does not shrink to the smaller value when you press the "Save" button. So the web interface shows 25G, but in reality the file is still 75G in size. SpaceInvaderOne has an excellent YouTube video on Docker principles and setup; take a look at that video and let me know how you get along. I see that you are pretty new to Unraid, welcome. You might want to subscribe to SpaceInvaderOne's channel; he has excellent information about Unraid, and he provides succinct instructions and good practices for anything you might do with Unraid. Good luck!
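      To actually reclaim the space, you have to recreate the image at the new size. A rough sketch of the usual sequence; the docker.img path shown is the default and may differ on your system, and your containers come back afterwards from their saved templates, so nothing is lost but the image itself:

           # 1. Settings > Docker: set Enable Docker to No and apply (stops the Docker service)
           # 2. Delete the oversized image (default location; adjust if yours differs)
           rm /mnt/user/system/docker/docker.img
           # 3. Set the desired image size (e.g. 25G) on the same page, re-enable Docker, and apply
           # 4. Reinstall your containers from Apps > Previous Apps; templates and appdata are preserved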