About MacDaddy

Community Reputation: 3 Neutral
  1. Any possibility to add sshpass? In conjunction with user.scripts I'm hoping to implement something like:

```shell
#!/bin/bash
#argumentDescription=Enter password and box name (mypass pihole)
sshpass -p $1 ssh pi@$2.rmac "sudo dd bs=4M if=/dev/mmcblk0 status=progress | gzip -1 -" | dd of=/mnt/user/Backups/$2/$(date +%Y%m%d_%H%M%S)_$2.gz
```

  I run 4 different Raspberry Pi boxes. They run for a good long time, but I've just had my third SD card fail. I would like to keep an image so I can recover quickly with minimal pain.
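A matching restore path closes the loop on "recover quickly". The sketch below assumes the backup script's own conventions (images under /mnt/user/Backups/<box>/, Pis answering at <box>.rmac, SD card at /dev/mmcblk0); `newest_image` and `restore` are hypothetical helpers, not part of user.scripts or unRaid.

```shell
#!/bin/bash
# Hedged restore sketch; paths and hostnames follow the backup script's conventions.

# Newest image for a box; the YYYYmmdd_HHMMSS names sort the same way as mtime.
newest_image() {
  ls -t "$1/$2"/*.gz 2>/dev/null | head -n1
}

# Stream the newest image back onto the Pi's SD card.
# Usage: restore mypass pihole
restore() {
  local img
  img=$(newest_image /mnt/user/Backups "$2")
  [ -n "$img" ] || { echo "no image for $2" >&2; return 1; }
  gunzip -c "$img" | sshpass -p "$1" ssh "pi@$2.rmac" "sudo dd bs=4M of=/dev/mmcblk0"
}
```

Restoring to a card larger than the original works as-is; restoring to a smaller one does not, which is an argument for keeping the image per box rather than per card.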
  2. Thanks for the info. The 5400 rpm drives should in theory be quieter than their 7200 rpm counterparts. Good points on the airflow; noise and airflow go hand in hand. I'll look into active cooling on the CPUs. Sent from my iPhone using Tapatalk
  3. I have a Supermicro X9DRi-LN4+/X9DR3-LN4+ server with dual Xeon E5-2630L v2 CPUs for my unRaid build. It is a surplus server in a Supermicro CSE-835TQ-R920B case. In my prior residence, I had the luxury of converting one of the closets to house all my equipment; it was designed for power/ventilation/noise. I'm now in a place where I can't modify any rooms, and the only location to house the equipment is a closet in the master bedroom. Needless to say, the server sounds like a Hoover vacuum with asthma on steroids. It has served me well, and I am thinking of transferring the M/B and 5xWD4…
  4. Thanks for your response. I had a feeling it would go that way. This is my first encounter with corruption. When I complete the XFS repair it will prune data (according to the dry run output). Is that data lost for good or will unRaid recognize it and let parity reconstruct?
  5. I'm currently using a MakeMKV docker to write cloned DVD structures into an MKV container. I've noticed that a share I'm using for the output keeps dropping. I can reboot, and the array will start with all drives green and the share restored. A snippet from the log is attached. I can start in maintenance mode and dry-run xfs_repair on all the hard drives. All are clean except md2. Is it better to xfs_repair the md2 drive, or to replace it with a new drive and let it rebuild? Note: while parity shows valid, it has been more than 700 days since the last check. Oct 13 18:36:…
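For the repair route, one detail matters on unRaid: run xfs_repair against the parity-protected /dev/mdX device with the array started in maintenance mode, not against the raw /dev/sdX, so parity is updated along with the fix. A guarded sketch (md2 matches the disk above; on a machine without that device the guard just prints the intended commands):

```shell
#!/bin/bash
# Hedged maintenance-mode repair sketch for unRaid; DEV is an assumption,
# substitute the md device for the affected slot.
DEV=${DEV:-/dev/md2}
DRY_RUN="xfs_repair -n $DEV"   # report-only pass: lists problems, changes nothing
REPAIR="xfs_repair $DEV"       # the real repair, once the dry run looks sane
if [ -b "$DEV" ]; then
  $DRY_RUN
else
  printf 'Would run:\n  %s\n  %s\n' "$DRY_RUN" "$REPAIR"
fi
```

Running the dry run first, as the post describes, is the right order: it shows what would be pruned before anything is actually modified.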
  6. Some people should learn to search the forum before posting redundant issues. My apologies.
  7. I'm resurrecting my unRaid box. It shows the latest update, v6.5.3, is available. When I initiate the upgrade it throws an invalid URL/server error message. Sorry for the pic; I'm on a direct terminal. Is it possible that Amazon is down? Or maybe I need an intermediate address step?
  8. First off, a big shout out to the entire unRaid community. It's funny how much stuff beyond my media library was stored away; being able to recover was a life saver. I sincerely appreciate all those who took the time to give me some actionable suggestions and thoughtful approaches. One thing I did when loading the drives was to put an Avery sticker with a "row column" indicator on each (i.e. A1 was in the uppermost left slot, B1 was the uppermost right slot). I knew A1 was parity and had high (but not absolute) confidence that A2 was disk1, A3 was disk2, etc. I used the plugin to docum…
  9. Thanks guys - I truly appreciate all the advice and recipes. It is helping me move forward. Here is where I am at the moment:
  - I found a new self-service kiosk at my local HEB called DryBox. The basic idea is to pull a vacuum and apply gentle heat, so moisture is drawn out at a lower temperature. It's targeted at cell phones, but a drive fits. I ran the failed drive through this process.
  - I was able to mount the failed drive and get some files off before it degraded. I let it cool down, rebooted, and got another set of files off. Repeat. Got a lesser set of files. Repeat. Fatal I/O error.
  - …
  10. Apologies. I was too quick to reply and probably didn't spell it out well enough to ask for good advice. Here is what I intend to try:
  1) I have five existing flood-damaged disks. The parity disk appears to be operational. Three of the data disks appear to operate well enough for me to rsync contents to a fresh disk. One data disk had I/O errors and is undergoing a second pass at the drying process.
  2) Once I get an additional controller card installed, I will have sufficient ports to attach the four operational flood-damaged disks.
  3) I'll use …'s advice o…
  11. Thanks. That's good advice. So far I've had 7 individual files that didn't copy cleanly. I've been able to retry and get three to copy, and to reboot and get two more to copy. So far only one has permanent I/O errors. I like the idea of pulling the files off individually from an emulated drive. I need to do this "locally" if possible with an unassigned device. I'm assuming there is a mount point in Linux that represents the emulated disk (is it /mnt/user?), or does /mnt/diskX show up for an emulated disk?
  12. Thanks much. I have a fresh disk precleared. I'll have to make a trip to the store for a controller card to add enough ports to the workbench machine to drive all the disks.
  13. Any ideas on assignment? A perfect world would let me map each diskX assignment to the correct /dev/sdX device. If I don't luck out with my machine config showing up on the recovered drives, then I'll be in uncharted territory (for me).
  14. I was expecting that. Given the wet disks ran for 24 hrs during the transfer, I might try to get them back on a workbench and see if I can rebuild that way.
  15. I've got two data disks recovered. One disk was slow to power up, so I stopped mid-boot. Then I noticed a small water drop on the breather hole; for that disk I'm starting the process all over again. I've got one more data disk and the parity remaining. I'm using the following rsync options with good results:

```shell
rsync -avhW --no-compress --progress /mnt/disk1/ /mnt/disk2/
```

  -a is for archive, which preserves ownership, permissions, etc.
  -v is for verbose, so I can see what's happening (optional)
  -h is for human-readable, so the transfer rate and file sizes are easier to read
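When the source drive is marginal, a clean rsync exit is worth double-checking. A hedged follow-up: re-run rsync with -c (full-content checksums) and -n (dry run), so any file it lists either differs or failed to copy; `same_file` below is a hypothetical rsync-free spot check for a single suspect file.

```shell
#!/bin/bash
# Hedged verification follow-up to the rsync recipe. A checksum dry run lists
# only files whose contents differ between the trees; empty output means the
# copy matches:
#   rsync -avhnc --no-compress /mnt/disk1/ /mnt/disk2/

# rsync-free spot check of one file (cmp -s compares byte-for-byte, silently).
same_file() {
  if cmp -s "$1" "$2"; then echo same; else echo differ; fi
}
```

Re-running the original copy command with -c appended also works as a repair pass: rsync only re-transfers the files whose checksums differ.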