Everything posted by wsume99

  1. Oh, I won't reuse the super.dat file. I think it's best if I just start with a clean install and manually set the drive assignments. I have repurposed a 3TB parity drive as a data drive, and since I'm not certain about the backup copy of my flash drive I really have no other choice. I am pretty certain I can get the assignments correct because of how I physically installed the drives in my case. Once I have the array running I'll copy over my scripts and modify the go file. The rest is just a lot of work with the old flash files for reference, as well as the contents of each drive as you suggested for shares. Here's a scary question... Since I killed the power in the middle of the delete operation and the flash drive was totally wiped, I'm assuming that at least something somewhere on one of my array drives or app drive was also hit. Is there a way to check for partially deleted files after I am back up and running?
  2. Yep, I was just reading a thread about recovering from a failed flash drive. I think you posted in that thread as well. (Thanks for your help here btw) I have my drives arranged in my case in a certain order so I can get the assignments set correctly. I made some adjustments to my go file to run some custom scripts; I can get those from my old flash drive. Is there a specific file(s) that includes the share settings? Even if I cannot reuse it, just being able to reference it would be helpful. Then it's just work to set up all my apps again because I don't have a recent copy of those. I need to read up on recovering docker containers. Assuming I stopped the erasing before it cleared my app drive, which I think is a safe assumption, then all of them should still be there. Not sure what needs to be on the flash drive to interface with the apps that are on the drive.
  3. I was on a laptop upstairs. I'd say it went about 2 minutes before I ran downstairs and held down the power button and killed the server. Hopefully most of the stuff on the drives was untouched. I was in a hurry to catch a flight so I won't be able to assess the situation until I get back home in a few days. Unraid is my only linux machine and it's just a basic media server running a few apps for me so I hardly ever have to touch it let alone mess around in the cli. Looks like I'll be getting a little refresher when I get home. I did find an old backup flash drive but I think it was from when I was running v5. I think I have a newer copy on a non-array drive that I used for apps.
  4. Major facepalm moment just happened:
     1) Confirmed that the AFP share was not mounted on the Mac
     2) Changed Export to No in the share settings
     3) Opened a telnet session to the machine as root
     4) Navigated to /mnt/user/TimeMachine and ran ls -lah; the .appleDS was still present, as well as a ./ and ../ directory
     5) Then executed a rm -rf /* command thinking it was limited to the path I had already navigated to, but nope. Flash drive is empty now.
     Hard lesson to learn. Well, that's certainly one way to delete a user share. Now researching how to restore an active array from nothing 😬
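The failure mode above is worth spelling out: a glob starting with / expands against the filesystem root no matter what directory you are sitting in. A minimal sketch in a throwaway temp directory (the paths here are illustrative, not the real unRAID share):

```shell
# Build a sandbox that stands in for the share.
sandbox=$(mktemp -d)
mkdir -p "$sandbox/share"
touch "$sandbox/share/file.txt" "$sandbox/share/.hidden"

cd "$sandbox/share"
# Relative glob: expands against the current directory only.
echo ./*              # -> ./file.txt
# Absolute glob: expands against / regardless of the current
# directory -- this is what wiped the flash drive.
echo /* | cut -c1-40  # -> /bin /boot /dev ... (everything under /)

cd / && rm -rf "$sandbox"
```

Had step 5 used `rm -rf ./*` (or just `rm -rf /mnt/user/TimeMachine`), only the share would have been touched.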
  5. I'm running 6.5.3. I have an AFP share that I created specifically to store Time Machine backups from a MBP. I no longer want to use Time Machine backups and want to delete the share, but I cannot remove the share until it's empty. I deleted the contents of the share from a Windows PC, and I get nothing from an ls query via the CLI. However, when I try to delete the share using rm -rf /mnt/user/TimeMachine I get an error message stating the share is not empty, something about an .appleDS file in the share. The same error occurred after I executed rm -rf *.* in the /mnt/user/TimeMachine share from the CLI. What am I missing? How do I empty the share so I can delete it? Any help would be appreciated.
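One likely reason the share still looked non-empty: by default, shell globs like `*.*` and `*` do not match files whose names begin with a dot, so hidden Apple metadata survives the rm. A small sketch (the `.appleDS` name is just a stand-in for whatever hidden file the share actually contained):

```shell
# Make a directory containing only a hidden file.
d=$(mktemp -d)
touch "$d/.appleDS"

cd "$d"
rm -rf *.* 2>/dev/null   # the glob matches nothing; the dot-file survives
ls -A                    # -> .appleDS  (still there; plain ls shows nothing)

# Removing the directory by its path takes hidden files with it:
cd / && rm -rf "$d"
```

`ls -lah` or `ls -A` (rather than bare `ls`) is what reveals these leftovers.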
  6. Makes sense. Problem is this is my wife's laptop, and I am setting up a network backup because she always forgot to connect her portable HDD. She uses this computer for photography, so having it backed up is fairly important, yet she never did it. This is the best I could come up with short of taking over the responsibility myself. If her system crashes she will just have to wait for however long it takes, or more likely however long I can stand the complaining. It finished the first backup last night and I downloaded an application to change the TM backup schedule. Every hour was a bit too frequent for me, so I have it going once per day now.
  7. Yeah I did read that somewhere, although I have been told that if your boot drive crashes you can do a fresh install first and then restore from the network TM backup.
  8. I just updated the plugin to the 2017.09.23 version and I still get the same error:
     Sep 23 08:26:51 Tower emhttp: cmd: /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin update preclear.disk.plg
     Sep 23 08:26:51 Tower root: plugin: creating: /boot/config/plugins/preclear.disk/preclear.disk-2017.09.23.txz - downloading from URL https://raw.githubusercontent.com/gfjardim/unRAID-plugins/master/archive/preclear.disk-2017.09.23.txz
     Sep 23 08:26:51 Tower root: plugin: creating: /boot/config/plugins/preclear.disk/preclear.disk-2017.09.23.md5 - downloading from URL https://raw.githubusercontent.com/gfjardim/unRAID-plugins/master/archive/preclear.disk-2017.09.23.md5
     Sep 23 08:26:51 Tower root: plugin: skipping: /boot/readvz already exists
     Sep 23 08:26:51 Tower root: plugin: setting: /boot/readvz - mode to 755
     Sep 23 08:26:51 Tower root: plugin: running: anonymous
     Sep 23 08:26:55 Tower root: plugin: running: anonymous
     Sep 23 08:26:55 Tower rc.diskinfo[14054]: killing daemon with PID [10972]
     Sep 23 08:26:56 Tower rc.diskinfo[14070]: process started. To terminate it, type: rc.diskinfo --quit
     Sep 23 08:26:56 Tower rc.diskinfo[14073]: PHP Warning: strpos(): Empty needle in /etc/rc.d/rc.diskinfo on line 341
     Sep 23 08:26:56 Tower rc.diskinfo[14073]: PHP Warning: strpos(): Empty needle in /etc/rc.d/rc.diskinfo on line 341
     Sep 23 08:26:56 Tower rc.diskinfo[14073]: PHP Warning: strpos(): Empty needle in /etc/rc.d/rc.diskinfo on line 341
     If I uninstall the plugin, the entries go away.
  9. I am officially a moron. The export setting was the trick. I don't recall seeing that option when I set up the share, but I must have just overlooked it. Thanks so much for the help!
  10. I followed what was recommended in other forum posts for setting up a TM share. Sent from my Nexus 5X using Tapatalk
  11. I enabled AFP in settings and then set the AFP to export that share. Sent from my Nexus 5X using Tapatalk
  12. FYI, here is the guide I followed in attempting to setup the TM backup
  13. When I look in finder I see an UNRAID AFP server and it includes a single share that I named TimeMachine. I can open the share and see the files that were copied to it from what I did above. I am assuming it is mounted yet I cannot see it when I try to select the drive to use for my TM backup.
  14. First off, I have zero experience with Macs, so I need some major hand holding please. I am trying to set up a TM backup from my wife's MBP to my unraid server. I have a share set up on a single disk with AFP enabled and export set to yes. This share is a single 3TB disk and her MBP has a 1TB disk. I followed online guides on how to set up the share as a network backup location. It involved the following steps: 1) Change system preferences to allow unsupported volumes to be displayed in TM 2) creating a sparsebundle locally (note I made the size 2950GB) 3) copying it over to the network drive 4) deleting the local copy of the sparsebundle 5) go into TM and set up the backup by selecting the network share from the list of disks. Problem is that the network share is not showing up in the list of available disks. All I see is the external drive she was using and an option to add a time capsule drive. I have been searching online for a few hours and tried several things, but nothing I do makes the drive show up. I did find one tip to force the TM destination using a sudo command via terminal; however, I keep getting an error 45 when I try this. Hopefully someone can help me or at least point me in the right direction. Appreciate any input.
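For reference, the guides' steps 1 and 2 usually come down to a couple of macOS commands. This is a sketch of the common approach, run on the MacBook (not on unRAID); the volume name, bundle filename, and mount point below are assumptions, not values from the post, and `tmutil setdestination` is the sudo command that typically produces the "error 45" when the destination isn't recognized as a supported network volume:

```shell
# Step 1: let Time Machine list unsupported (non-Apple) network volumes.
defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1

# Step 2: create the sparsebundle locally (size capped at 2950GB as in
# the post), then copy it to the mounted AFP share.
hdiutil create -size 2950g -type SPARSEBUNDLE -fs HFS+J \
    -volname "Time Machine Backups" TimeMachineBackup.sparsebundle

# Step 5 alternative: point TM at the share directly from the terminal.
# /Volumes/TimeMachine is a hypothetical mount point for the AFP share.
sudo tmutil setdestination /Volumes/TimeMachine
```

These are macOS-only commands, so they are shown as a command fragment rather than a runnable script.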
  15. I will preface everything by saying I am not running the latest version of the plugin. I thought I had updated it before I started my latest preclear, but I didn't; I am running the 2017.03.31 version. I have 2 8TB WD Reds (WD80EFAX) that I shucked from external enclosures. They are both connected to a 2 port pci-e SATA expansion card. I launched a 3 cycle preclear on both drives within about 1 minute of each other. I checked them today and the preclear status messages are frozen (see image below) on sdg. The read and write operations are both still increasing at about the same pace. Is it safe to assume that the preclear operation is still proceeding as normal on sdg and the status updates are just frozen? Everything on sdh appears to be going as it normally would. I am wondering if I should kill the preclears and update the script, or just keep going? Update: the status updates have now frozen for sdh as well, but it continues to accumulate reads. Is the preclear still running and the status updates just frozen?
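One way to confirm a drive is still being read even when a status display freezes is to sample the kernel's per-device I/O counters directly. A sketch, assuming the device name `sdg` from the post (field 3 of /proc/diskstats is the device name, field 6 is total sectors read):

```shell
# Sample the sectors-read counter twice; if it grows, the drive is
# still being read regardless of what the preclear screen says.
dev=sdg   # assumption: substitute your own device
read1=$(awk -v d="$dev" '$3 == d {print $6}' /proc/diskstats)
sleep 5
read2=$(awk -v d="$dev" '$3 == d {print $6}' /proc/diskstats)

if [ "${read2:-0}" -gt "${read1:-0}" ]; then
    echo "still reading"
else
    echo "no read activity in the last 5s"
fi
```

This checks the same thing the post infers from the increasing read counts, just without trusting the plugin's display.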
  16. I found the FAQ and it finally clicked for me. Files are now being found as they should be. Thanks a TON.
  17. Just upgraded to v6 and also switched from sab/SB to nzbget and sonarr. I am getting a "No files found are eligible for import in..." error on newly downloaded files. I verified the path and the permissions for the download location based on what I have read in other posts. If I try to manually import the files I get "No video files were found in the selected folder." There are mkv files in the folders and their size does fall within the quality limits, so I am at a loss. Any suggestions?
  18. @ Matrixvibe - Where did you get those bushings (one white & one black) for the cable management holes you drilled?
  19. You will be able to see the raw results of the SMART tests in your syslog. All you really need to do is check to see if you have any change in the pending or reallocated sectors count. If you do then you'll need to decide if you want to run more preclears or RMA the drive.
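The two counters mentioned above can be pulled out of a SMART report mechanically. A sketch, using a canned two-line stand-in for real `smartctl -A /dev/sdX` output (attribute names and column layout follow the usual smartctl table; the values here are made up):

```shell
# Stand-in for `smartctl -A /dev/sdX` output: attribute table rows for
# the two counters that matter when judging a preclear.
smart_report='  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0'

# Print each attribute name with its raw value (last column).
echo "$smart_report" | awk '/Reallocated_Sector_Ct|Current_Pending_Sector/ {print $2, $NF}'
# prints:
# Reallocated_Sector_Ct 0
# Current_Pending_Sector 0
```

If either raw value grows between the pre-read and post-read reports, that is the change the post says should trigger either more preclear cycles or an RMA.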
  20. With the basic version you are limited to three drives in the array, but you can have an unlimited number of drives outside of the array. Since any drive you would be preclearing would not be assigned to the array, you can do this without any problem. I have done this myself, so I can confirm that it works just fine.
  21. Looking at what you provided, I cannot tell if that is the pre- or post-read SMART report. However, the fact that your disk reports 25 power-on hours leads me to believe that it is the post-read assessment. If the info you posted is in fact the SMART report from the post-read, then your disk is good (no reallocated or pending sectors).
  22. Just a guess here but is it possible that you have that SATA port configured as IDE in the BIOS?
  23. Simple answer - No. But ... personally I keep my preclear reports so I can use them as a historical comparison so that when I run SMART reports on a disk I can go back and compare the results to when the disk was new.
  24. I had a WD20EARS that showed the same behavior that you are seeing. I RMA'd that drive. I suppose WD would say that technically there is nothing wrong with the drive because it is passing the SMART test, but I opted to RMA it back to the vendor I purchased it from anyway. I did this because I just didn't feel comfortable adding the drive to my array. The replacement drive cleared without any problems.