Everything posted by wsume99

  1. Oh, I won't reuse the super.dat file. I think it's best if I just start with a clean install and manually set the drive assignments. I have repurposed a 3TB parity drive as a data drive, and since I'm not certain about the backup copy of my flash drive I really have no other choice. I am pretty certain I can get the assignments correct because of how I physically installed the drives in my case. Once I have the array running I'll copy over my scripts and modify the go file. The rest is just a lot of work with the old flash files for reference, as well as the contents of each drive, as you suggested for shares. Here's a scary question... since I killed the power in the middle of the delete operation and the flash drive was totally wiped, I'm assuming that at least something somewhere on one of my array drives or app drive was also hit. Is there a way to check for partially deleted files after I am back up and running?
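One hedged way to look once the array is back up: rm removes directory entries, so while the deleted files themselves leave no trace, any directory the delete reached gets a fresh mtime. Scanning each disk for directories modified in the window around the accident can narrow down the damage (the mount point and timestamps below are placeholders, not values from this thread):

```shell
# Directories whose entries were removed get a new mtime; scan each
# array disk (here /mnt/disk1) for directories touched in the suspect
# window -- adjust the timestamps to the actual time of the accident.
find /mnt/disk1 -type d \
     -newermt "2018-09-20 14:00" ! -newermt "2018-09-20 14:30" -print
```

Anything this prints is a directory worth comparing against the old flash's share listings.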
  2. Yep, I was just reading a thread about recovering from a failed flash drive. I think you posted in that thread as well. (Thanks for your help here, btw.) I have my drives arranged in my case in a certain order, so I can get the assignments set correctly. I made some adjustments to my go file to run some custom scripts; I can get those from my old flash drive. Is there a specific file (or files) that holds the share settings? Even if I cannot reuse it, just being able to reference it would be helpful. Then it's just work to set up all my apps again, because I don't have a recent copy of those. I need to read up on recovering docker containers. Assuming I stopped the erasing before it cleared my app drive, which I think is a safe assumption, all of them should still be there. I'm not sure what needs to be on the flash drive to interface with the apps that are on that drive.
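For anyone landing here later: on v6 the per-share settings live as one small key=value .cfg file per share under config/shares on the flash drive, so an old flash (or its backup) can be read directly for reference. A sketch, with /mnt/oldflash as a hypothetical mount point for the old flash's contents:

```shell
# Each user share has a plain-text config file on the flash drive.
# /mnt/oldflash is an example mount point for the old flash's files.
ls /mnt/oldflash/config/shares/
cat /mnt/oldflash/config/shares/TimeMachine.cfg   # key=value lines, e.g. shareExport="e"
```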
  3. I was on a laptop upstairs. I'd say it went about 2 minutes before I ran downstairs and held down the power button and killed the server. Hopefully most of the stuff on the drives was untouched. I was in a hurry to catch a flight so I won't be able to assess the situation until I get back home in a few days. Unraid is my only linux machine and it's just a basic media server running a few apps for me so I hardly ever have to touch it let alone mess around in the cli. Looks like I'll be getting a little refresher when I get home. I did find an old backup flash drive but I think it was from when I was running v5. I think I have a newer copy on a non-array drive that I used for apps.
  4. Major facepalm moment just happened. I confirmed that the AFP share was not mounted on the Mac, changed Export to No in the share settings, opened a telnet session to the machine as root, navigated to /mnt/user/TimeMachine, and ran ls -lah. The .appleDS was still present, as well as the ./ and ../ directories. I then executed rm -rf /*, thinking it was limited to the path I had already navigated to, but nope. Flash drive is empty now. Hard lesson to learn. Well, that's certainly one way to delete a user share. Now researching how to restore an active array from nothing 😬
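The trap, for anyone who finds this later: a glob with a leading slash is absolute, so the current working directory is irrelevant to it. A safe demonstration in a scratch directory:

```shell
# /* expands against the filesystem root no matter where you are;
# ./* (or a bare *) expands against the current directory only.
mkdir -p /tmp/globdemo && cd /tmp/globdemo && touch a b
echo /*     # /bin /boot /etc ... -- everything at the root
echo ./*    # ./a ./b            -- only the scratch files
```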
  5. I'm running 6.5.3. I have an AFP share that I created specifically to store Time Machine backups from a MBP. I no longer want to use Time Machine backups and want to delete the share, but I cannot remove the share until it's empty. I deleted the contents of the share from a Windows PC, and an ls query via the CLI returns nothing. However, when I try to delete the share using rm -rf /mnt/user/TimeMachine I get an error message stating the share is not empty, something about an .appleDS file in the share. The same error occurred again after I executed rm -rf *.* in /mnt/user/TimeMachine from the CLI. What am I missing? How do I empty the share so I can delete it? Any help would be appreciated.
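A likely explanation: shell globs skip dot-files by default, and *.* only matches names containing a dot anyway, so a hidden entry like .appleDS survives both of the deletes tried above. A sketch of how to see and remove the hidden entries (run against the share path from the post):

```shell
# ls -A reveals hidden entries that a bare ls hides
ls -A /mnt/user/TimeMachine
# the .[!.]* pattern matches dot-files while skipping the . and .. entries
rm -rf /mnt/user/TimeMachine/.[!.]* /mnt/user/TimeMachine/*
```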
  6. Makes sense. Problem is this is my wife's laptop, and I am setting up a network backup because she always forgot to connect her portable HDD. She uses this computer for photography, so having it backed up is fairly important, yet she never did it. This is the best I could come up with without taking over the responsibility of doing it myself. If her system crashes she will just have to wait for however long it takes, or more likely however long I can stand the complaining. It finished the first backup last night, and I downloaded an application to change the TM backup schedule. Every hour was a bit too frequent for me, so I have it going once per day now.
  7. Yeah I did read that somewhere, although I have been told that if your boot drive crashes you can do a fresh install first and then restore from the network TM backup.
  8. I just updated the plugin to the 2017.09.23 version and I still get the same error:

Sep 23 08:26:51 Tower emhttp: cmd: /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin update preclear.disk.plg
Sep 23 08:26:51 Tower root: plugin: creating: /boot/config/plugins/preclear.disk/preclear.disk-2017.09.23.txz - downloading from URL
Sep 23 08:26:51 Tower root: plugin: creating: /boot/config/plugins/preclear.disk/preclear.disk-2017.09.23.md5 - downloading from URL
Sep 23 08:26:51 Tower root: plugin: skipping: /boot/readvz already exists
Sep 23 08:26:51 Tower root: plugin: setting: /boot/readvz - mode to 755
Sep 23 08:26:51 Tower root: plugin: running: anonymous
Sep 23 08:26:55 Tower root: plugin: running: anonymous
Sep 23 08:26:55 Tower rc.diskinfo[14054]: killing daemon with PID [10972]
Sep 23 08:26:56 Tower rc.diskinfo[14070]: process started. To terminate it, type: rc.diskinfo --quit
Sep 23 08:26:56 Tower rc.diskinfo[14073]: PHP Warning: strpos(): Empty needle in /etc/rc.d/rc.diskinfo on line 341
Sep 23 08:26:56 Tower rc.diskinfo[14073]: PHP Warning: strpos(): Empty needle in /etc/rc.d/rc.diskinfo on line 341
Sep 23 08:26:56 Tower rc.diskinfo[14073]: PHP Warning: strpos(): Empty needle in /etc/rc.d/rc.diskinfo on line 341

If I uninstall the plugin the entries go away.
  9. I am officially a moron. The export setting was the trick. I don't recall seeing that option when I set up the share, but I must have just overlooked it. Thanks so much for the help!
  10. I followed what was recommended in other forum posts for setting up a TM share.
  11. I enabled AFP in settings and then set the AFP to export that share.
  12. FYI, here is the guide I followed when attempting to set up the TM backup
  13. When I look in Finder I see an UNRAID AFP server, and it includes a single share that I named TimeMachine. I can open the share and see the files that were copied to it in the steps above. I am assuming it is mounted, yet I cannot see it when I try to select the drive to use for my TM backup.
  14. First off, I have zero experience with Macs, so I need some major hand-holding please. I am trying to set up a TM backup from my wife's MBP to my unraid server. I have a share set up with a single disk, with AFP enabled and Export set to Yes. This share is a single 3TB disk, and her MBP has a 1TB disk. I followed online guides on how to set up the share as a network backup location. It involved the following steps: 1) change System Preferences to allow unsupported volumes to be displayed in TM; 2) create a sparsebundle locally (note: I made the size 2950GB); 3) copy it over to the network drive; 4) delete the local copy of the sparsebundle; 5) go into TM and set up the backup by selecting the network share from the list of disks. Problem is that the network share is not showing up in the list of available disks. All I see is the external drive she was using and an option to add a Time Capsule drive. I have been searching online for a few hours and tried several things, but nothing I do makes the drive show up. I did find one tip to force the TM destination using a sudo command via Terminal; however, I keep getting an error 45 when I try this. Hopefully someone can help me or at least point me in the right direction. Appreciate any input.
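For reference, the five steps above roughly correspond to these macOS commands. This is a sketch, assuming the AFP share is mounted at /Volumes/TimeMachine; the sparsebundle name is an example, not one from the post:

```
# 1) let Time Machine show unsupported (network) volumes in its picker
defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
# 2) create the sparsebundle locally, capped at 2950 GB as in the post
hdiutil create -size 2950g -type SPARSEBUNDLE -fs HFS+J \
    -volname "Time Machine" tm-backup.sparsebundle
# 3/4) copy it to the mounted share, then remove the local copy
cp -R tm-backup.sparsebundle /Volumes/TimeMachine/ && rm -rf tm-backup.sparsebundle
# alternative to 5) when the share refuses to appear in the picker
sudo tmutil setdestination /Volumes/TimeMachine
```

The tmutil line is presumably the "sudo command via terminal" mentioned above; error 45 there usually indicates the destination wasn't accepted as a valid backup volume.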
  15. I will preface everything by saying I am not running the latest version of the plugin. I thought I had updated it before I started my latest preclear, but I didn't; I am running the 2017.03.31 version. I have 2 8TB WD Reds (WD80EFAX) that I shucked from external enclosures. They are both connected to a 2-port PCIe SATA expansion card. I launched a 3-cycle preclear on both drives within about 1 minute of each other. I checked them today and the preclear status messages are frozen (see image below) on sdg. The read and write operations are both still increasing at about the same pace. Is it safe to assume that the preclear operation is still proceeding as normal on sdg and the status updates are just frozen? Everything on sdh appears to be going as it normally would. I am wondering if I should kill the preclears and update the script, or just keep going? Update: the status updates have now frozen for sdh as well, but it continues to accumulate reads. Is the preclear still running with just the status updates frozen?
  16. I found the FAQ and it finally clicked for me. Files are now being found as they should be. Thanks a TON.
  17. Just upgraded to v6 and also switched from sab/SB to nzbget and sonarr. I am getting a "No files found are eligible for import in..." error on newly downloaded files. I verified the path and the permissions for the download location based on what I have read in other posts. If I try to manually import the files I get "No video files were found in the selected folder." There are mkv files in the folders and their size does fall within the quality limits, so I am at a loss. Any suggestions?
  18. Yep, already there. I looked at them a while back but opted for the unmanaged Cisco switch that I currently have. Poor decision on my part, but at the time I was not even considering home automation and was not really concerned about network segregation. I am now trying to decide between the EdgeSwitch Lite, the EdgeSwitch, and the UniFi Switch. I see the difference between the two EdgeSwitches - the Lite does not have PoE - but I am trying to figure out what the difference is between the EdgeSwitch and the UniFi Switch. Best I can tell, the UniFi Switch uses their software to configure/manage the switch, whereas the standard EdgeSwitch uses the traditional CLI or a device-hosted GUI. Both have basically the same functionality and are essentially the same price on Amazon. The only real con to the UniFi Switch I can see is that it requires the UniFi software to be hosted somewhere on the network, but I will already have a Cloud Key on my network to control my APs, so it could also control the UniFi Switch. The EdgeSwitch can operate without the UniFi software, so if I ever decided to ditch the UniFi APs I could still run an EdgeSwitch without needing to host that software. The EdgeSwitch Lite is about half the cost of the other two switches, and I have already invested in a 6-port PoE injector, so I don't think having a PoE switch is worth the extra $190. Any thoughts?
  19. Lol, I have some Ubiquiti hardware and was just reading some info on their site about VLANs. I currently have a 5-port EdgeRouter X in my network. I have a couple of the UniFi AC Pro APs and a Cloud Key that I plan to hopefully get up and running tonight. I have been waiting about 6-7 weeks to get internet service at my house, and it was supposed to be installed this morning, so assuming that was successful I will be getting them hooked up tonight. I didn't want to try setting them up with no internet access. The switch I have is a Cisco SG102-24-NA, and I am currently trying to figure out if it supports VLANs. edit - Yeah, the Cisco switch is unmanaged, so if I wanted to do VLANs I would need to buy a switch that supports them or buy more unmanaged switches. I actually have a couple of D-Link 8-port switches lying around, but I think that having separate switches would kind of defeat the whole purpose of having VLANs.
  20. I recently moved into a new house and I am looking for some resources to help me learn about upgrading my home network to accommodate everything I want to do. I am specifically looking for information about network segregation and how to properly isolate things. I currently have a very basic setup - router, switch, and wireless AP - with all devices in my home on the same network. I am planning to dive into home automation, and I am pretty sure I want to isolate all that stuff onto its own network. I also want to create a guest wifi network for any visitors. I am not afraid to do research and I want to learn. I did some quick searches, and the information is all over the map from many different sources. All I really want is for someone to point me to a good source where I can read and learn. I can provide more details if anyone would like to know, but I don't want to suck up people's time with a bunch more noobish questions, unless you are game.
  21. "This chance of encountering an UBE is correct if my hypothesis is right, that UBEs can be modelled as a Poisson process with rate 10^-14." I think it is pretty safe to say that not all of those assumptions are true. If they were, there would be a METRIC CRAP TON of users complaining about parity errors during their monthly parity checks. It would be nice if one of the smarter people would weigh in. Specifically, I am wondering what happens when you encounter a URE during a parity check. I am working from the premise that a parity check reads the data disks, computes the expected parity value, and compares that to the value currently stored on the parity drive. So what happens if there is a URE and the current parity bit cannot be read? What happens if there is a URE on one of the data disks and parity cannot be computed? I am assuming that the system would log some kind of error. Perhaps I am incorrect.
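Taking the quoted model at face value, the arithmetic is easy to check: with a per-bit error rate p and B bits read, the chance of a clean pass is (1-p)^B ≈ e^{-pB}. A quick sanity check for a hypothetical 20 TB full-array read (the array size is an example, not a figure from the thread):

```shell
# Expected UREs and clean-pass probability for reading `bytes` of data,
# assuming independent bit errors at the spec-sheet rate of 1e-14/bit.
awk 'BEGIN {
  bytes = 20e12                # total data read during the check (example)
  p     = 1e-14                # quoted URE rate per bit
  lam   = p * bytes * 8        # expected error count (Poisson mean)
  printf "expected UREs: %.2f\n", lam        # 1.60
  printf "P(no URE):     %.2f\n", exp(-lam)  # 0.20
}'
```

An 80% chance of at least one error on every monthly check is exactly the "metric crap ton of complaints" prediction above, which is the argument that the model's assumptions don't hold in practice.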
  22. Let me start off by saying that I'm not an expert when it comes to servers (though I've learned a lot from building and using unraid), so I have some questions that I'd like the community's input on. I am asking because I got involved in a project at work where we're trying to model the reliability/availability of a server that a vendor is building. The server uses a bunch of RAID-5 arrays for storage, and I have no first-hand experience with RAID-5. My biggest concern is that most of the storage arrays are built using 4 x 4TB HDDs, and I'm worried about encountering a URE during a rebuild after a failed disk. The data contained on the arrays is mission essential, and the vendor claims they have an on-site backup that is no more than 24 hours old. I'm pressing the vendor for more info on that topic, but at this point I just need to better understand how a RAID-5 array behaves so I can properly model its availability.
1) When one of the drives in the array experiences a failure, the array is still accessible and can provide data (although at a slower throughput), correct?
2) When an array is in the degraded state (one drive failed), what happens if a URE is encountered? I'm under the impression that the specific data/file being accessed could be unavailable. Is that correct? If this happens, does the array go offline, or does it continue to operate and you just have problems accessing the data on the disk where the URE is occurring?
3) After a disk fails it must be replaced with a new disk, and the data will be rebuilt to it using the remaining data and parity info. I assume the array is still functional at this point, but with lower throughput, correct?
4) If a URE is encountered during a disk rebuild, what happens? I think the rebuild operation simply fails and the array remains functional but in the degraded state (#1 above). Is that correct?
5) Is there anything other than the URE rate (i.e. 1E15, 1E16, etc.) and drive size that influences the likelihood of encountering a URE during a rebuild operation? I had a long discussion about UREs with the vendor yesterday, and they were claiming that the cache on the RAID controller can reduce the likelihood of encountering a URE. I think that is complete BS, but I suppose I could be wrong. Please excuse my ignorance and feel free to educate me as necessary.
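On question 5: under the independence assumption, the only inputs really are the per-bit URE rate and the total bits read. For the 4 x 4TB arrays described above, a rebuild must read the three surviving disks end to end, and the back-of-envelope number falls out directly:

```shell
# Rebuild of a 4 x 4 TB RAID-5: the 3 surviving disks (12 TB) are read
# end to end. Assuming independent errors at the quoted 1e-14/bit rate:
awk 'BEGIN {
  bits = 3 * 4e12 * 8          # bits that must be read without error
  p    = 1e-14                 # URE rate per bit
  printf "P(rebuild completes with no URE) ~= %.2f\n", exp(-p * bits)
}'
```

Roughly a 38% chance of a clean rebuild under these assumptions, which is why the 1e-14 spec number, taken literally, makes large RAID-5 rebuilds look so risky; in practice drives usually do much better than the spec-sheet rate.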
  23. Yep, sounds like the same syndrome. I also had one that worked after about a year, but the second one that was stored longer failed. "But to be clear, were your drives mounted in your server? If they were just precleared and stored elsewhere, that would be evidence that the failure I experienced was not related to storing the drive the way I did." They were stored in my server but not connected to power. My server is located in my basement, which stays pretty cool (~18°C) year round. The temps inside the server case are probably in the low 20s.
  24. I just had this exact same thing happen to me as well. Just before the flood I bought 2 x 2TB Hitachi 5K3000 HDDs for ~$55 each (with rebates). I ran three preclear cycles on both drives and they had no errors. One sat unconnected for about a year before I added it to the array, and it has worked just fine. The second drive sat unconnected for 2.5 years, and then I moved it to my desktop for use there. It died immediately, with less than 100 hours of power-on time. I RMA'd the drive and it was replaced.