vinnybhaskar

Members · 19 posts

  1. Yep, it does work with the disk being referenced instead of the user share. However, I was wondering what prevents the user share from working when either of them should suffice.
  2. Any specific reason why this Docker image needs /mnt/cache/appdata instead of /mnt/user/appdata? A lot of other Docker images I use (even from LSIO) work well with /mnt/user/appdata. "OK, should have been fixed in V6.2.4 of unRAID. As an aside, you'll get much better performance from your docker containers if you can put the appdata on a cache disk." I don't see any mention of a fix in unRAID 6.2.4 for the above issue. Could you please explain?
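     To illustrate the two mappings in question, here is a minimal sketch; the container name, image, and /config path are placeholders, and only the host-side path changes:
     # Mapping appdata through the user share (goes via the FUSE layer):
     docker run -d --name myapp -v /mnt/user/appdata/myapp:/config myapp-image
     # Mapping the cache disk directly, bypassing the user-share FUSE layer:
     docker run -d --name myapp -v /mnt/cache/appdata/myapp:/config myapp-image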
  3. Well, I forgot to mention that I had seen that post, and I do not need a VM running. The Atom boards don't support virtualization anyhow.
  4. Hi everyone, I need a cheap SATA controller to add more ports to my low-power NAS built around an Intel Atom processor. Currently, I have an onboard Marvell 88SE9230 controller which keeps dropping disks under heavy load. I have found two cards, based on the Marvell 88SE9215 and ASM1061 controllers. The Marvell card has 4 ports and the ASM1061 has 2. I'm inclined towards buying the 4-port Marvell, but I'm wary given the ongoing issues with the onboard 9230 controller. What's your experience with these cards? Which is more reliable? Are there any other cards around the same price with a different controller?
     Marvell 9215 card: https://www.amazon.com/IO-Crest-Controller-Non-Raid-SI-PEX40064/dp/B00AZ9T3OU/ref=sr_1_3?ie=UTF8&qid=1482424123&sr=8-3&keywords=Sata+controller
     ASM1061 card: https://www.amazon.com/IO-Crest-Port-PCI-Express-SY-PEX40039/dp/B005B0A6ZS/ref=sr_1_5?ie=UTF8&qid=1482424123&sr=8-5&keywords=Sata+controller
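     For anyone comparing, a quick way to see which controller each disk actually hangs off, using standard Linux tools (nothing unRAID-specific assumed):
     # List SATA/AHCI controllers with their PCI vendor:device IDs
     lspci -nn | grep -i -E 'sata|ahci'
     # Map each disk to its controller via the sysfs device path
     ls -l /sys/block/sd*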
  5. Finally got the issue resolved by adding an IP route inside the container. In all my earlier attempts I was using the wrong gateway. Here's the correct route:
     ip route add 192.168.1.0/24 via 172.17.0.1 dev eth0
     The IP after "via" is the gateway, and that needs to be the container's gateway. Docker sets this automatically when creating a virtual Ethernet bridge.
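     If you'd rather not bake the route into the image, a minimal sketch of applying it from the unRAID host after the container starts (the container name and image are placeholders; route changes inside the container require the NET_ADMIN capability):
     # Grant the capability at creation time:
     docker run -d --name vpnclient --cap-add=NET_ADMIN vpnclient-image
     # Then push the LAN return route into the running container:
     docker exec vpnclient ip route add 192.168.1.0/24 via 172.17.0.1 dev eth0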
  6. Thanks for the reply. Well, I forgot to add this. I'm using a custom docker image I created. Additionally, it's an OpenVPN client connecting to a VPN server.
  7. I've been struggling with OpenVPN routing for the last couple of days. Hopefully someone can help me get this sorted. I have a Docker container with its network set to bridge. At first, I'm able to reach this container from my local network (192.168.1.0/24). But the moment I start OpenVPN and initiate a VPN connection, the container is no longer reachable from the local network. However, the container remains accessible from the host unRAID OS, which is what's confusing me. Before initiating the VPN connection, the route table inside the container looks like this:
     bash-4.3# ip route
     default via 172.17.0.1 dev eth0
     172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2
     Once the VPN gets connected, OpenVPN modifies the routes to:
     bash-4.3# ip route
     0.0.0.0/1 via 10.108.21.1 dev tun0
     default via 172.17.0.1 dev eth0
     10.108.21.0/24 dev tun0 proto kernel scope link src 10.108.21.40
     128.0.0.0/1 via 10.108.21.1 dev tun0
     172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2
     178.255.153.76 via 172.17.0.1 dev eth0
     What routes should I be adding to allow any computer on my local network to communicate with the Docker container while the VPN is active? Or is there something else breaking this? Using unRAID 6.2 RC4.
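     A quick way to see why LAN traffic breaks here is to ask the kernel which route a reply to a LAN host would take; a minimal diagnostic sketch (192.168.1.50 is a placeholder for any machine on the LAN):
     # With the VPN up, 192.168.1.0/24 matches the 128.0.0.0/1 route via tun0,
     # so replies leave through the tunnel instead of back out eth0.
     ip route get 192.168.1.50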
  8. I did not get the chance to read this thread in its entirety, so please ignore this if the bug has already been reported. With RC4, the "make_bootable_mac" script renders the flash drive un-bootable and corrupts its contents. See the attached screenshots. The RC4 version of the script encounters an error and doesn't seem to complete all steps, in comparison to the RC3 version. Also attached is a screenshot of the disk initialization error. This happens when re-inserting the flash drive on the Mac after the script has completed and ejected the flash. The issue appears to be a typo in the device identifier hdiutil is trying to unmount: it should be "/dev/disk1s1" instead of "/dev/disk1s1s1".
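     For anyone patching the script by hand before an official fix, the broken step presumably looks something like the commented line below. This is a sketch based on the typo described above, not the script's exact contents, and the disk identifier varies per machine:
     # Broken (extra "s1" appended to the partition identifier):
     # hdiutil unmount /dev/disk1s1s1
     # Fixed:
     hdiutil unmount /dev/disk1s1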
  9. That makes perfect sense. I went ahead and pre-cleared the disks once and they came out good! Thanks!
  10. Reallocated sector count and Current pending sector are zero. Do I need to look at any other attributes as well? The disks were in use on a Mac and were almost full.
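     For reference, a minimal way to dump all the SMART attributes at once (assuming smartmontools is available, as it is on recent unRAID builds; replace sdX with the actual device):
     # Besides 5 (Reallocated) and 197 (Current Pending), attributes
     # 198 (Offline Uncorrectable) and 199 (UDMA CRC Error Count) are worth a look.
     smartctl -A /dev/sdX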
  11. I have examined their SMART attributes and find no issues. These disks have had data written to and read from them frequently; they were daily drivers. So I believe it would be safe to assume they are good drives. Secondly, even if I were to run one or two passes of preclear, that would not necessarily mean they would not fail on the third. I think preclearing for stress testing makes sense with new disks, to rule out any DOAs (infant mortality).
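     For completeness, running a stress-test cycle looks roughly like this, assuming Joe L.'s preclear_disk.sh community script from the forum (the device name is a placeholder and the flags are from memory, so double-check against the script's own usage text):
     # One full preclear cycle (pre-read, zero, post-read) on a placeholder device:
     preclear_disk.sh -c 1 /dev/sdX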
  12. I've read a lot on pre-clearing disks prior to adding them to an array. I understand pre-clear does the following: 1. Clears the disks so that unRAID does not need to do the same. 2. Stress-tests disks to rule out bad disks. Let's park the idea of stress testing the disk. The disks I'm concerned about have been in service for a while and I know they are good. Coming back to clearing the disk: I believe this is only needed when adding disks to an existing array with a valid parity. Emphasis on "existing array with a valid parity". If I were to start a NEW array without a parity disk, unRAID just needs to format the data disks to one of the supported file systems and bring the array online. I can then stop the array and add a parity drive, which would trigger a parity build. Using the above process I can completely skip the clearing/pre-clearing process for NEW arrays. Am I correct?
  13. I was reading the post "Will server Power button gracefully shut down the server?" (http://lime-technology.com/forum/index.php?topic=6078) and was wondering whether it applies to 6.1.x or 6.2. Do these new versions support graceful shutdown (stopping the array and then shutting down the server) out of the box on press of the power button? I wasn't able to try out the procedure listed in the post above since the command "cat /proc/acpi/event" errored out. What's the correct path in the case of 6.1.x or 6.2? Additionally, during a power outage when the UPS triggers unRAID to shut down, what happens to processes such as a parity check, parity rebuild, preclear, etc.? Do these processes gracefully stop prior to the shutdown? Is there any process that can cause data corruption or other unwarranted issues when such a shutdown occurs?
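     For background: /proc/acpi/event was removed from newer kernels, which would explain the error; the acpid daemon handles the power button instead. A minimal sketch of the conventional acpid hookup follows; the paths use the stock acpid layout, and whether unRAID wires it up exactly this way is an assumption. The powerdown command stands in for whatever script stops the array cleanly before halting:
     # /etc/acpi/events/power -- match the power button and run a handler
     event=button/power.*
     action=/etc/acpi/powerbtn.sh
     # /etc/acpi/powerbtn.sh -- the handler performs the graceful shutdown
     #!/bin/sh
     /usr/local/sbin/powerdown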
  14. That's a good point. I should try that too. Would certainly make things simpler if disaster strikes. Thanks for your help, guys. Loving the support here at the unRAID forum!
  15. Would it be the "Zero only the MBR" option? See attached screenshot. "Removing and preclearing an existing array member will void parity, so if you do that you will need to rebuild parity. Preclearing is not necessary to format a disk in unRAID." Makes sense when you want to preserve the parity. However, in my case I'm going destructive since it's just an experiment. BTW, would your initial answer be valid even when starting out with a new install of unRAID on flash?