
daddygrant

Members • 30 posts

Posts posted by daddygrant

  1. On 2/9/2020 at 3:06 AM, johnnie.black said:

    This won't work. You can replace one of them with the NVMe device and only then remove the other; this can all be done with the pool online, except when stopping/starting the array. Instructions for all those procedures are in the FAQ:

     

    Thanks... I figured it out after re-reading some of your previous posts on the matter.

     

    Here's what I did; there is minor downtime when stopping the array to assign drives (the rough btrfs equivalents are sketched below the list).

    1. Unassign old drive 1 from the pool

    2. Let cache balance

    3. Assign the new drive to the pool

    4. Let cache balance

    5. Unassign old drive 2 from the pool

    6. Let cache balance

    7. Drink beer and enjoy my new larger single drive.
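
    For anyone curious what the GUI is doing during those balances, a rough manual equivalent under btrfs, purely as a sketch (the device names and the /mnt/cache mount point are placeholders; on Unraid you would normally just do it through the GUI as above):

    btrfs balance start -f -dconvert=single -mconvert=dup /mnt/cache   # drop raid1 so a device can be removed
    btrfs device remove /dev/sdX1 /mnt/cache                           # old drive 1; its data is migrated off first
    btrfs device add /dev/nvme0n1p1 /mnt/cache                         # add the new NVMe device
    btrfs device remove /dev/sdY1 /mnt/cache                           # old drive 2; its data lands on the NVMe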

     

  2. Hey everyone,

     

    I am trying to add an Intel NVMe card to my Unraid setup, but it's not showing up in the system. I know the card works, so I figure there is a configuration issue I need to work around. I have an NVMe M.2 running perfectly fine in the system as well.

     

    This is what I've already tried:

    1. Disabled/enabled all virtualization features

    2. Reset the card

    3. Swapped the card (I have two :) )

    4. Moved the card to a different slot.

     

    Any pointers on where to go next?
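
    In case it helps, a quick way to check from the Unraid console whether the card is even visible at the PCIe level (these are the standard Linux tools, nothing Unraid-specific; nvme-cli may need to be installed):

    lspci | grep -i -E 'nvme|non-volatile'   # is the controller visible on the PCIe bus at all?
    nvme list                                # namespaces the kernel has enumerated (needs nvme-cli)
    dmesg | grep -i nvme                     # kernel messages if the controller failed to initialize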

     

    Uuan

  3. Hey,

     

    I'd like to go from my two 480GB SATA SSD cache drives to a single Intel Enterprise 1.2TB NVMe cache drive.

     

    Currently both SATA drives are in a cache pool. I can have all three drives physically connected to facilitate the move, so I need a sanity check on the fastest method to get this done with minimal downtime (I do have backups just in case). Here is what I'm thinking.

     

    Method 1

    1.  Stop array

    2. Set cache slots from 2 to 3

    3. Assign NVMe to slot 3

    4. Start array

    5. Run a balance (anything special needed?)

    6. Stop array

    7. Unassign the two SATA drives from cache setting

    8. Start array

     

    Method 2 (a friend's suggestion, relying on btrfs magic)

    1.  Stop Array

    2. Remove both SATA from Cache assignment

    3. Assign NVMe to slot 1

    4. Start Array

     

    Thoughts?
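
    Whichever method, a hedged way to sanity-check the pool state before and after, assuming the pool is mounted at /mnt/cache:

    btrfs filesystem show /mnt/cache   # which devices are currently in the pool
    btrfs filesystem df /mnt/cache     # data/metadata profiles and how full they are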

     

  4. 7 hours ago, daddygrant said:

    Interesting thing.  I got it working last night from the phone without issue. Easy as pie.

     

    But this morning I added a few more clients and none can connect, including my phone, which worked fine last night. The clients say they're connected, but that isn't reflected on the server and traffic is not passing. Firewall ports and DDNS are good.

     

    Any thoughts?

     I found the problem. Oddly enough, the local endpoint information went blank. I re-entered the information and now I'm rocking with the LAN-access client profile. The client profile for server-only access is still not showing traffic.
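
    For anyone hitting the same thing, a quick way to see whether a peer is actually handshaking and passing traffic, assuming this is the WireGuard setup and the default wg0 interface name:

    wg show wg0              # latest handshake and transfer counters per peer
    wg show wg0 endpoints    # the endpoint each peer actually resolved to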

  5. Interesting thing.  I got it working last night from the phone without issue. Easy as pie.

     

    But this morning I added a few more clients and none can connect, including my phone, which worked fine last night. The clients say they're connected, but that isn't reflected on the server and traffic is not passing. Firewall ports and DDNS are good.

     

    Any thoughts?

  6. On 1/25/2019 at 9:59 AM, binhex said:

    OK, that's going to be a problem then, as it will clash with the container's openvpn process; you need to disable that if you want to use this container.

    I turned off the QNAP OpenVPN service and everything is working now.

    Thank you.

  7. 21 hours ago, binhex said:

    OK, that's going to be a problem then, as it will clash with the container's openvpn process; you need to disable that if you want to use this container.

    OK.  Thank you for confirming my suspicions.  I'll disable it and use an openvpn docker for the same duties.

    I'm experiencing something very similar with my QNAP. I moved from Unraid and it was working, then it was not. I looked at /etc/daemon_mgr.conf but only found "DAEMON53 = openvpn, start, QNAP_QPKG=QVPN /usr/sbin/openvpn --config /etc/openvpn/server.conf --daemon ovpn-server" and not the "stop" entry others have removed to solve the problem. My logs look good on the container, but access is a no-go.

     

    I'm receiving the error:

     

    192.168.1.54 took too long to respond.

    Try:

    Checking the connection

    Checking the proxy and the firewall

    ERR_CONNECTION_TIMED_OUT
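
    Based on what others in the thread describe, the usual fix is stopping QNAP's built-in OpenVPN daemon so it stops holding the port. A hedged sketch of what that edit might look like (the conf line is the one quoted above; changing 'start' to 'stop' is my assumption from the other reports, so back the file up first):

    # /etc/daemon_mgr.conf on the QNAP - same line as quoted above, with 'start' changed to 'stop'
    DAEMON53 = openvpn, stop, QNAP_QPKG=QVPN /usr/sbin/openvpn --config /etc/openvpn/server.conf --daemon ovpn-server
    # then confirm no host openvpn process is still running before starting the container
    ps aux | grep '[o]penvpn'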

     

  9. Yep... Thanks.  Worked for me.

    1 hour ago, yellowcooln said:

     

     

    I did as well, and what fixed it was using https://IP:6237. Even though your server might not appear to be running HTTPS (like mine), it runs on HTTPS from the looks of it.

     

    21 hours ago, tamito said:

     

    I got the same issue today. I accessed it through https and the problem was solved.
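
    If anyone wants to confirm from a console rather than the browser, a quick probe (replace the placeholder with your server's IP; -k skips certificate verification because the cert is likely self-signed):

    curl -vk https://<server-ip>:6237/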

     

  10. On 5/28/2018 at 6:37 PM, jonesy8485 said:

     

    I'm looking to swap my XFS cache out for a larger one. What if I add an SSD to the array and only allow the cache-only shares to move to the SSD drive using the share settings? I know it will still have to write parity, but will this save me any significant amount of time? Not looking forward to being down for two days like last time! 

     

    I have slots available for options if you all know of any, but my cache is XFS currently.

     

    It may be faster, but if you have a parity disk it could slow you down. I know Unraid warns that SSDs aren't supported in the array, but it should work.

  11. 3 hours ago, david279 said:

    So with this method the old cache disk would just become an unassigned disk?

     

    Sent from my SM-G955U using Tapatalk

     

     

     

     

    The classic method (a rough console sketch follows below):

    1. Stop all VMs and Dockers.

    2. Set the shares to not use the cache.

    3. Run the mover to migrate the data to the array.

    4. Swap the cache drive.

    5. Set the selected shares to use the cache again and run the mover once more.

    6. Re-enable Dockers and VMs.

     

    For me it takes about 2 days. Mostly because of Plex.
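
    A rough console equivalent of that classic flow, assuming stock Unraid paths (the share cache settings still have to be changed in the GUI):

    docker stop $(docker ps -q)   # stop all running containers (do the equivalent for VMs)
    # change the shares' cache setting in the GUI as described above, then flush the cache:
    mover                         # Unraid's mover script (/usr/local/sbin/mover)
    # swap the drive, flip the share settings back, then pull the data onto the new cache:
    mover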

  12. 16 hours ago, johnnie.black said:

    No, current cache needs to be btrfs like it says on the notes.

     

    Yes, unless something goes wrong; that's why it's always good to back up any important data.

     

    Yes.

     

     

     

    Thank you. Unfortunately, I checked and my current cache disk is XFS. I have, however, formatted the new disk as btrfs for future migrations. I'm wondering if I can use MC to move the data and swap the cache. Any other suggestions besides the legacy method?
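
    Since both disks are in the box, one hedged alternative to the mover round-trip is copying the cache contents directly over the console, assuming the new disk is already formatted and mounted (the /mnt/disks/newcache mount point is just a placeholder from Unassigned Devices):

    # stop Dockers/VMs first so nothing writes to the cache during the copy
    rsync -avX --progress /mnt/cache/ /mnt/disks/newcache/
    # then stop the array, assign the new disk as the cache device, and start the array again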

  13. I currently have a 500GB cache drive that I want to replace with a 1.2TB drive.  Both drives are in the server and the 1.2TB one is unassigned.  In the past I used the mover to send data to the array then back to the new cache drive but that process took a long time.  I saw a new method on the FAQ that may work for me since both drives are in the server. 

     

     

    1. Does anyone have experience with this method and do I need to stop the Dockers or VMs? 
    2. Is it really that easy with no data loss? 
    3. Does the array continue to run during the process (Share/Dockers/VMs)?

     

    Stop - Select - Start... is that it? Mind blown if it is.

    On 7/18/2016 at 4:46 AM, johnnie.black said:

    How do I replace/upgrade a cache pool disk?

     

     

    A few notes:

    -unRAID v6.4.1 or above required, upgrade first if still on an older release.

    -Always a good idea to back up anything important on the current cache in case something unexpected happens

    -This procedure assumes you have enough ports to have both the old and new devices connected at the same time, if not you can use this procedure instead.

    -Current cache disk filesystem must be BTRFS, you can’t directly replace/upgrade an XFS or ReiserFS disk.

    -On a multi device pool you can only replace/upgrade one device at a time.

    -You can directly replace/upgrade a single btrfs cache device but the cache needs to be defined as a pool, you can still have a single-device "pool" if the number of defined cache slots >= 2

    -You can't directly replace an existing device with a smaller one, only one of the same or larger size, you can add one or more smaller devices to a pool and after it's done balancing stop the array and remove the larger device(s) (one at a time if more than one), obviously only possible if data still fits on the resulting smaller pool.

     

     

    Procedure:

     

    • stop the array
    • on the main page click on the cache device you want to replace/upgrade and select the new one from the drop down list (any data on the new device will be deleted)
    • start the array
    • a btrfs device replace will begin, wait for cache activity to stop, the stop array button will be inhibited during the operation, this can take some time depending on how much data is on the pool and how fast your devices are.
    • when the cache activity stops or the stop array button is available the replacement is done.
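
    For the curious, what the GUI kicks off here is essentially a btrfs device replace; a manual sketch, assuming the pool is mounted at /mnt/cache and the device names are placeholders (on Unraid you would normally just follow the GUI steps above):

    btrfs replace start /dev/sdX1 /dev/nvme0n1p1 /mnt/cache   # old device, new device
    btrfs replace status /mnt/cache                           # shows percent complete
    btrfs filesystem resize max /mnt/cache                    # grow into the larger device once done (specify the devid on a multi-device pool)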

     

     

     

     

  14. 2 minutes ago, dlandon said:

     

    You are saying that the size doesn't show properly at times?  I added a timeout to the command to get the nfs size.  If it is not showing correctly, I may have to increase the timeout.  This was added to prevent hangs when the remote share drops off-line.

     

    As I was cycling through adding and removing the remote share, at times it would show me a value under Size. It was always 8.? GB.

    During this period the share didn't actually mount. The Used and Free columns had no value.
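
    A hedged way to tell from the console whether the path is really a mounted remote share or just reporting the underlying filesystem (the mount point below is a placeholder; Unassigned Devices puts remote shares under /mnt/disks):

    mountpoint /mnt/disks/READYNAS_share   # reports whether anything is actually mounted there
    df -h /mnt/disks/READYNAS_share        # if nothing is mounted, this shows the parent filesystem's size instead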

  15. 5 minutes ago, dlandon said:

    It's not mounted.

     

     

    @dlandon and @itimpi Thanks for the quick reply guys. I appreciate your time.

     

    @dlandon I did notice it was not mounted, but sometimes it showed me 8-ish under Size, which struck me as odd.

    @itimpi I hear what you're saying. I did attempt to browse it, unsuccessfully, from the Linux side before associating the mount point with the Krusader docker. I just didn't want to accept the truth... :(

     

    Good News.. I just got it to mount!!!  

    I bounced between the CIFS and NFS share for a while like a crazy person. "The definition of insanity"

    Then I had the great idea to leave the unmounted config and reboot. After reboot I saw the same thing but when I started the array the share mounted. YAY!

     

    Now I wonder what protocol would be best, NFS or CIFS ?

     

    2017-09-17 08_04_03-Tower_Main.png

  16. I've used this plugin for a few years now with great success but I would like a little help on this issue.

     

    I have a ReadyNAS sharing about 17TB of data, and I'd like to share the media with my Dockers.

     

    I am able to load and mount the share via CIFS and NFS but I have run into anomalies.

    1. The size is shown as 0 or an incorrect size (not that important to me).

    2. I can't see any contents of the share when browsing via Docker or the Unraid share browser. It just shows nothing.

     

    Where am I going wrong?
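
    One way to narrow down whether this is a plugin issue or a server-side export/permission issue is to test the mount by hand from the Unraid console; a sketch, with the ReadyNAS hostname and export path as placeholders:

    mkdir -p /mnt/test
    mount -t nfs readynas:/data/media /mnt/test   # or CIFS: mount -t cifs //readynas/media /mnt/test -o username=guest
    ls -la /mnt/test                              # empty output here points at the export/permissions, not the plugin
    umount /mnt/test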

     

     

    Thanks again for the great plugin.

    2017-09-17 07_35_19-Tower_Main.png
