Posts posted by wdelarme

  1. I installed with no problem and everything ran great for a few hours, then Unraid wouldn't allow any connections from any machine on my network. I couldn't get to the GUI and couldn't access shares or dockers, but my pfSense VM was still accessible. I had to init 6 the Unraid box, and after the reboot everything worked fine for a few hours, then the same issue hit again. I reverted to the previous version of Unraid and all was fine. I tried updating again, and a few hours later had the same issue, so I had to init 6 the box again; everything worked fine again for a few hours. I've now reverted to the previous Unraid version for the second time and all is stable. Is there a specific log file I can grab when I update for the third time and lose all connectivity to everything except the pfSense VM, so I can post it here to help? I did run the update assistant and it said everything was OK to update; I just wanted to verify it isn't a known docker issue.
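    Since the box still comes back after an init 6, one idea (a minimal sketch, assuming the usual Unraid layout where the flash drive is mounted at /boot, run from the local console before rebooting) is to copy the syslog off to flash so it survives the reboot and can be posted here:

    # save the current syslog to the flash drive before rebooting
    cp /var/log/syslog /boot/syslog-$(date +%Y%m%d-%H%M).txt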

  2. Trying to set this up for Dropbox, but for some reason the Dropbox share on Unraid that I'm pointing to won't let me change the owner from root. What am I missing? In the Unraid terminal, ls -lart shows Dropbox as root:root, and I can't access it from Mac or winblowz.

    ls -lart output

    drwxrwxrwx  1 root   root     0 Jul 22 20:57 Dropbox/

     

    Below is my startup script

     

    #!/bin/bash
    #----------------------------------------------------------------------------
    # This script mounts your remote share with the recommended options.         |
    # Just define the remote you wish to mount as well as the local mountpoint.  |
    # The script will create a folder at the mountpoint                          |
    #----------------------------------------------------------------------------

    # Local mountpoint
    mntpoint="/mnt/user/Dropbox"

    # Remote share
    remoteshare="dropbox:"

    #--------------------------------------------------------------------------------------------------------------


    mkdir -p "$mntpoint"
    rclone mount --max-read-ahead 1024k --allow-other "$remoteshare" "$mntpoint" &
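    A possible fix for the root-owned mount (a sketch, not the script author's recommendation): rclone mount accepts --uid, --gid, and --umask options, so the mount can be presented as Unraid's nobody:users (99:100) instead of root:

    # present the mount as nobody:users so Mac/Windows clients can access it
    rclone mount --max-read-ahead 1024k --allow-other \
        --uid 99 --gid 100 --umask 000 \
        "$remoteshare" "$mntpoint" &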
     

  3. On 5/1/2018 at 10:19 AM, ken-ji said:

    It's been a while, and I haven't found the time to work out the one really annoying bug(?) with Dropbox when inside a docker container.

    Whatever the Dropbox client uses to ID the host changes and is invalidated when unRAID is rebooted.

     

    That said, @jowi, I can force the image to update, and it should pull the latest Dropbox binaries in.

     

     

    This error?

    WARNING:tornado.access:404 HEAD /blocks/7516203/........................

  4. On 3/24/2018 at 8:27 AM, SoAvenger said:

    You should post about this problem over in the link you shared.

    How do you transcode to RAM without it filling rootfs?

    I followed binhex's instructions (probably borked it up somewhere) and it's transcoding to /tmp in the rootfs, so every morning Unraid is dead with rootfs at 100% until I reboot. Not sure how I messed up the configuration.
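    One way to keep a runaway transcode from exhausting rootfs (a minimal sketch; the mountpoint path and the 4g cap are assumptions, not taken from binhex's guide) is to give the transcoder its own size-capped tmpfs instead of bare /tmp:

    # create a dedicated RAM disk with a hard size limit for transcode output
    mkdir -p /tmp/transcode
    mount -t tmpfs -o size=4g tmpfs /tmp/transcode
    # then map /tmp/transcode into the container as its transcode directory;
    # when the 4g cap is hit the transcode errors out instead of filling rootfs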

  5. My program guide quit loading today. I'm not sure what happened; is anyone else having issues with just the program guide? It's been showing a lot more blank thumbnails for a few weeks, but today all of the thumbnails are gone and it just says it's rebuilding my program guide.

  6. 14 hours ago, Djoss said:

    It's probably the update to version 6.6.0. I'm working on a new image with this version.

     

    Could you check /config/log/service.log.0 for the update URL? It's basically a file that CP tries to download that contains the upgrade code. I would like to look at it to make sure I don't miss anything.

     

    root@Tower:/mnt/cache/appdata/CrashplanPRO/conf# ls

    adb/                 my.service.xml   service.login  ui.log.xml     ui_app.properties  upgradeui.properties

    default.service.xml  service.log.xml  service.model  ui.properties  upgradeui.log.xml

     

    I'm not seeing the file you asked about.
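    Worth noting: that listing is of the conf directory, while the file Djoss mentioned lives under /config/log inside the container. A quick check from the host (the container name CrashPlanPRO is an assumption based on the appdata path; adjust if yours differs):

    # list the container's log directory and peek at the rotated service log
    docker exec CrashPlanPRO ls -l /config/log/
    docker exec CrashPlanPRO tail -n 50 /config/log/service.log.0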

  7. Anyone know if any of these can be deleted safely? 

     

     

    root@Tower:/var/lib/docker/btrfs/subvolumes# du -sh *
    452M    057376e3e3a9528456906c791cccec6a4a6168166aaf74515ec3243cc029d8b5
    133M    2aff5d62d9973d7d835172add3637d319c59fb23261f3e7585177d807b54103e
    133M    31a69c0effc2810791b5bd2da01003066ec3025a3a9f86c85f571348e7864be9
    781M    4ffef8e048a3ea0dea485172cb38c0654b0241622b54e0e614bbc3d40ee58787
    221M    534efc53144748dec70cddd1f886bc3dfb734910eaedcfe75d8b53a1efc8a86f
    452M    5660a586f8a093620f88cd258c999d617792938e1aa757a5e0060cc9d1d2cf33
    5.4M    5785053242be33b1fdd14a71e44fc128eb6ed4d1c966deb671162ba306af21f5
    138M    5c295472d96dbc3d8b191265b05208c23f0b10a46a7535546ddf54c932ac86a8
    452M    6307aab5d4411630b2fc6281f3316f68030bca3178ff42cad7ff9eb7490ae8f5
    16K     76fc7dc46a28fafe2fb1aa0498f75a2597e55258c8ecdadac2d3629daabfdca0
    5.4M    791a484a86aa1be08492675888b87a3047503540eb1a6a5f9cc80dafcb682362
    221M    7aa1429a7cc5117a1582a5f3ca9783da3148160a8af9e2893143076946371633
    302M    80aa5f60ef64ec756ad568600d00208a569e4c564dbc4fc6108cc43f1c28810d
    302M    85d149f8719158f311fe5e075cfa0797a3d0bdeb40088c461009f05abfcd83b7
    133M    98aafcdad7691340a0014e8a717355a595d33f387053a8e08ed361d3696b1c8b
    4.0K    9f4a4515ea588a5b6fe7fdc0447f33690065e8bccd76d0507297d0302e80947b
    307M    a2556f951c415130980b2cb15272c450145e5e1dba1f13076fcfc5df2d0050f1
    133M    b1baa391602d948d19af2b26d0db48606c686a7c5fee1a8befe82e926e8231cf
    806M    b4d0b022b4186c11996d456d5767f6e69b2368ed2100480bf1b6c01d5fbfdb4f
    806M    b4d0b022b4186c11996d456d5767f6e69b2368ed2100480bf1b6c01d5fbfdb4f-init
    806M    bb85d9804e721d8454c4b8625ac1882e54a819e930c9bcdc9f57a8b0240c3f82
    452M    c367262e9aa663336ab21549dfe78d9a4dc0680a520397a35e9ae68e8f85ece3
    452M    c6573b0c64cb7a4679bc99526d9a7ff9c9d7a1e6e25847ebe1024715976214f9
    147M    cde133f12dbfe5880403b98cc42526085c2231fc29285dec1d0e90ec4b728509
    221M    ddb0c62fe3e3502e426d578f1e07c62573661d3aab744af68bb5939d54c991e1
    965M    ebd269e600fc8ad776079368cee7d709d8c2fc0c2ca8e6ffb3a24ec61a875d76
    781M    ebd269e600fc8ad776079368cee7d709d8c2fc0c2ca8e6ffb3a24ec61a875d76-init
    root@Tower:/var/lib/docker/btrfs/subvolumes# ls -lart
    total 0
    drwx------ 1 root root   20 Mar 31 21:39 ../
    drwxr-xr-x 1 root root   10 Mar 31 21:40 9f4a4515ea588a5b6fe7fdc0447f33690065e8bccd76d0507297d0302e80947b/
    drwxr-xr-x 1 root root   20 Mar 31 21:40 76fc7dc46a28fafe2fb1aa0498f75a2597e55258c8ecdadac2d3629daabfdca0/
    drwxr-xr-x 1 root root   38 Mar 31 21:40 791a484a86aa1be08492675888b87a3047503540eb1a6a5f9cc80dafcb682362/
    drwxr-xr-x 1 root root   38 Mar 31 21:40 5785053242be33b1fdd14a71e44fc128eb6ed4d1c966deb671162ba306af21f5/
    drwxr-xr-x 1 root root  182 Mar 31 21:41 a2556f951c415130980b2cb15272c450145e5e1dba1f13076fcfc5df2d0050f1/
    drwxr-xr-x 1 root root  102 Mar 31 21:41 80aa5f60ef64ec756ad568600d00208a569e4c564dbc4fc6108cc43f1c28810d/
    drwxr-xr-x 1 root root  132 Mar 31 21:43 5c295472d96dbc3d8b191265b05208c23f0b10a46a7535546ddf54c932ac86a8/
    drwxr-xr-x 1 root root  140 Mar 31 21:43 cde133f12dbfe5880403b98cc42526085c2231fc29285dec1d0e90ec4b728509/
    drwxr-xr-x 1 root root  140 Mar 31 21:43 b1baa391602d948d19af2b26d0db48606c686a7c5fee1a8befe82e926e8231cf/
    drwxr-xr-x 1 root root  140 Mar 31 21:43 31a69c0effc2810791b5bd2da01003066ec3025a3a9f86c85f571348e7864be9/
    drwxr-xr-x 1 root root  140 Mar 31 21:43 2aff5d62d9973d7d835172add3637d319c59fb23261f3e7585177d807b54103e/
    drwxr-xr-x 1 root root  156 Mar 31 21:43 98aafcdad7691340a0014e8a717355a595d33f387053a8e08ed361d3696b1c8b/
    drwxr-xr-x 1 root root  140 Mar 31 21:43 534efc53144748dec70cddd1f886bc3dfb734910eaedcfe75d8b53a1efc8a86f/
    drwxr-xr-x 1 root root  140 Mar 31 21:43 ddb0c62fe3e3502e426d578f1e07c62573661d3aab744af68bb5939d54c991e1/
    drwxr-xr-x 1 root root  150 Mar 31 21:43 7aa1429a7cc5117a1582a5f3ca9783da3148160a8af9e2893143076946371633/
    drwxr-xr-x 1 root root  164 Mar 31 21:44 4ffef8e048a3ea0dea485172cb38c0654b0241622b54e0e614bbc3d40ee58787/
    drwxr-xr-x 1 root root  184 Mar 31 21:44 ebd269e600fc8ad776079368cee7d709d8c2fc0c2ca8e6ffb3a24ec61a875d76-init/
    drwxr-xr-x 1 root root  250 Mar 31 21:44 ebd269e600fc8ad776079368cee7d709d8c2fc0c2ca8e6ffb3a24ec61a875d76/
    drwxr-xr-x 1 root root  102 May 24 04:00 85d149f8719158f311fe5e075cfa0797a3d0bdeb40088c461009f05abfcd83b7/
    drwxr-xr-x 1 root root  102 May 24 04:00 6307aab5d4411630b2fc6281f3316f68030bca3178ff42cad7ff9eb7490ae8f5/
    drwxr-xr-x 1 root root  102 May 24 04:01 5660a586f8a093620f88cd258c999d617792938e1aa757a5e0060cc9d1d2cf33/
    drwxr-xr-x 1 root root  102 Jun 14 04:00 c6573b0c64cb7a4679bc99526d9a7ff9c9d7a1e6e25847ebe1024715976214f9/
    drwxr-xr-x 1 root root  102 Jun 14 04:00 057376e3e3a9528456906c791cccec6a4a6168166aaf74515ec3243cc029d8b5/
    drwxr-xr-x 1 root root  102 Jun 14 04:00 c367262e9aa663336ab21549dfe78d9a4dc0680a520397a35e9ae68e8f85ece3/
    drwxr-xr-x 1 root root  102 Jun 14 04:00 bb85d9804e721d8454c4b8625ac1882e54a819e930c9bcdc9f57a8b0240c3f82/
    drwxr-xr-x 1 root root  142 Jun 14 04:00 b4d0b022b4186c11996d456d5767f6e69b2368ed2100480bf1b6c01d5fbfdb4f-init/
    drwx------ 1 root root 3476 Jun 16 17:37 ./
    drwxr-xr-x 1 root root  294 Jun 16 17:43 b4d0b022b4186c11996d456d5767f6e69b2368ed2100480bf1b6c01d5fbfdb4f/
    root@Tower:/var/lib/docker/btrfs/subvolumes#
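    Rather than deleting subvolume directories by hand (which can leave Docker's layer metadata inconsistent), one commonly suggested approach is to let Docker itself drop whatever is unused; a sketch using stock Docker CLI commands:

    # remove all images not referenced by any container
    docker image prune -a
    # remove stopped containers, dangling images, unused networks and build cache
    docker system prune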
     

  8. I basically just want to do it for the experience. If I remove the drive with the array offline, then set that slot in Unraid to "no device" and start the array, will that move the files to another drive? I hope I explained that right. Is moving those files off a manual operation?
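    For reference, moving the files off is a manual step; a common sketch (the disk numbers here are hypothetical) is to rsync the contents of the outgoing drive onto another data disk before shrinking the array:

    # copy everything from the outgoing disk to another array disk, preserving attributes
    rsync -avh /mnt/disk3/ /mnt/disk4/
    # verify the copy, then delete the originals before removing disk3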

  9. I'd like to add a second parity drive to a 5-disk array that currently has only one parity drive. What would be the best way to move these drives around without losing the data already on the array?

     

    As it is now I have

     

    Drive 1: Parity, 3TB
    Drive 2: Array, 3TB
    Drive 3: Array, 3TB
    Drive 4: Array, 3TB
    Drive 5: Array, 3TB

     

    I want it to be like this:

    Drive 1: Parity, 3TB
    Drive 2: Parity, 3TB
    Drive 3: Array, 3TB
    Drive 4: Array, 3TB
    Drive 5: Array, 3TB

     

    I have approx. 2TB of data on the array now.