
Posts posted by jordan

  1. 1 minute ago, ken-ji said:

    I've never seen the tornado warnings myself, but AFAIK it's usually an access issue between the Dropbox app and your files

    Thanks. I had this drive fail yesterday, so DB is currently not running while I rebuild; then I'll be moving the entire array to a new (better-cooled) system. Christmas break is coming up, so I'll have a couple of days to dive into it for real. If I do find the problem, I'll post the solution back here.

  2. I should add that all of my really important stuff is backed up to a cloud service, and everything but the last couple of days is also on a portable drive in my fire safe (because as much as I love cloud backup, a multi-TB download on my internet connection would be painful).

     

    edit: by "important stuff" I pretty much mean all of my personal content. My commercial media... well, others have already uploaded that to the cloud, mostly in better quality than my own rips. :-)

  3. Everything - photos of the kids, home video, all my college coursework, my business files, everything. Poof - it's just locked inside a dead hard drive.

     

    So I just want to say a little thanks to the community and unRAID because, of course, the server emailed me when the drive suddenly failed, it's running perfectly fine serving my files from the cache, and I only needed a few minutes to power down the server and swap in a spare drive, which is now being rebuilt. It's been pretty good for a "bad Monday."

  4. I'm in the middle of switching from an old generic case with an Icy Dock 5-in-3 (awful cooling - can't recommend) to a Silverstone DS380 with an ASRock Z390M-ITX/ac MB (6 SATA onboard + PCIe x4 M.2, one PCIe x16, 2x Intel 1000bt + wifi). One advantage I just found out about is that the HD trays are powered by 4-pin Molex, and I just confirmed that there is no 3.3v on the power bus for the rack. That means no messing with the pin-bypass hack on shucked drives - that pin simply isn't powered in this system.

     

    I'm not sure about the cooling as I've not run it full of disks (SoonTM), but the placement of the fans is good. The only issue I can see is that, while one side of the 8x bay is covered by two large fans, the opposite side of the sliding case is flush, so the drive fans pressurize the case and the rear fan draws that air from the drives, over the CPU cooler, then out. I'm running an i5-9400 with the stock cooler and it's about 4x the CPU I need, so I don't expect that to be an issue.

     

    The unit is reasonably quiet with the three fans running. The PSU (300W Silverstone 80 Plus Bronze) hasn't fired up its fan yet (it's fanless at idle), but so far I'm only running a memory test with no drives.

  5. On 3/16/2019 at 3:47 PM, nanoblock said:

    I have been browsing around the forums and I see some posts, but nothing that has ultimately been able to help me resolve my issue. I have just finished a new NAS build with an Intel i5 9600k and an ASRock Z390M motherboard running 6.7.0-rc5. I have followed some guides: I have the iGPU enabled in my BIOS with the primary graphics adapter set to onboard, and have added the line append initrd=/bzroot i915.alpha_support=1 under the unRAID OS section of the syslinux configuration. I reboot, then type modprobe i915 in the CLI, and do not see the /dev/dri directory. Has anyone been able to crack this and come up with the solution? I am also not sure if it's as simple as me missing something in the BIOS; http://asrock.pc.cdn.bitgravity.com/Manual/Z390M-ITXac.pdf page 59 seems to show the settings which would apply to enabling Quick Sync.

     

    Thank you in advance for the help.

    Sorry to dig up an old thread, but I've had piss-poor luck choosing MBs for unraid in the past. Did you ever get quick sync working on your i5-9 series on the asrock ITX board?  I'm looking to move to a nicer box and this board seems like a winner if I can get hardware transcoding on the iGPU. 
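     For anyone retracing the quoted setup, it boils down to a couple of checks. A hedged sketch only - the kernel parameter and syslinux edit come from the quoted post and applied to unRAID 6.7.x-era kernels; exact BIOS option names vary by board:

    ```shell
    # syslinux.cfg (unRAID OS boot entry) - kernel parameter from the post:
    #   append initrd=/bzroot i915.alpha_support=1

    # After rebooting, load the Intel graphics driver and look for the
    # render node the transcoder needs:
    modprobe i915
    ls -l /dev/dri
    # If the iGPU is active, this should list card0 and renderD128.
    # A missing /dev/dri means the iGPU is disabled in the BIOS or the
    # driver failed to bind.
    ```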

  6. 4 minutes ago, itimpi said:

    That is the way I am using the DropBox container.

    If you have a reference/tutorial you've seen for creating this, would you drop me a link in your reply? If not, I can go through the google, but there are some exceptionally common terms in my likely query, so any help or shortcut to resources would be appreciated. :-)

  7. 2 minutes ago, ken-ji said:

    I think the error is because when you run Dropbox on the array, it correctly sees the filesystem as shfs, which is artificially unsupported.

    Some users have just created an ext4 loopback image. I suppose with the container working again, you could have it on the cache drive.

    Okay, thanks. That's definitely going to take some reading. 
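     For reference, the ext4 loopback idea ken-ji mentions could look something like this - a hypothetical sketch only (the paths and the 10G size are my guesses, not from this thread):

    ```shell
    # Keep the Dropbox data inside an ext4 image file so the client stops
    # rejecting unRAID's shfs filesystem. Run on the unRAID host as root.

    truncate -s 10G /mnt/cache/dropbox.img    # sparse image on the cache drive
    mkfs.ext4 /mnt/cache/dropbox.img          # format the image as ext4
    mkdir -p /mnt/cache/dropbox-data
    mount -o loop /mnt/cache/dropbox.img /mnt/cache/dropbox-data

    # Then map /mnt/cache/dropbox-data into the container as the Dropbox
    # folder; the client sees plain ext4 and stops complaining.
    ```

     The mount would need to be repeated after each reboot (e.g. from the go file or a user script), since unRAID doesn't persist manual mounts.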

  8. That got it running. One last question (I hope) - After linking I'm greeted with the ominous

     

    [ALERT]: So your files continue to sync, sign in to your Dropbox account and move Dropbox to a supported file system.

     

    I just paged back through this thread, and I see where it came up last year, when DB kicked everything but ext4 to the curb, but I can't tell whether a workaround was built in, or whether everyone just bailed on the protected array and went to external USB drives formatted ext4. Or are those still using it applying a trick like https://www.linuxuprising.com/2018/11/how-to-use-dropbox-on-non-ext4.html ?

  9. Just now, ken-ji said:

    you'll need to delete .dropbox-dist from the appdata folder - by default /mnt/user/appdata/dropbox

    the broken versions have left incomplete .dropbox-dist folders, which are now screwing it up.

    Ah - makes sense. I didn't think to go back and scrub the old directories out manually. I'll try that. 
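     In other words, a sketch of the cleanup (the container name "dropbox" is the usual default; yours may differ):

    ```shell
    # Stop the container, wipe the incomplete .dropbox-dist left behind by
    # the broken versions, then restart so a fresh copy is downloaded.
    docker stop dropbox
    rm -rf /mnt/user/appdata/dropbox/.dropbox-dist
    docker start dropbox
    ```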

  10. I've clearly done something wrong, but I'm at a loss as to what. I initially tried updating the container and came up with an error so I deleted the container and reinstalled from Community Applications. I got (effectively) the same error:

     

    Quote

     

    dropbox: locating interpreter
    dropbox: logging to /tmp/dropbox-antifreeze-Rn6B9w
    dropbox: initializing
    dropbox: initializing python 3.7.2
    dropbox: setting program path '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-85.4.155/dropbox'
    dropbox: setting home path '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-85.4.155'
    dropbox: setting python path '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-85.4.155:/dropbox/.dropbox-dist/dropbox-lnx.x86_64-85.4.155/python-packages-37.zip'
    dropbox: python initialized
    dropbox: running dropbox
    dropbox: setting args
    dropbox: applying overrides
    dropbox: running main script
    dropbox: load fq extension '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-85.4.155/cryptography.hazmat.bindings._constant_time.cpython-37m-x86_64-linux-gnu.so'
    dropbox: load fq extension '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-85.4.155/cryptography.hazmat.bindings._openssl.cpython-37m-x86_64-linux-gnu.so'
    dropbox: load fq extension '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-85.4.155/cryptography.hazmat.bindings._padding.cpython-37m-x86_64-linux-gnu.so'
    dropbox: load fq extension '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-85.4.155/psutil._psutil_linux.cpython-37m-x86_64-linux-gnu.so'
    dropbox: load fq extension '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-85.4.155/psutil._psutil_posix.cpython-37m-x86_64-linux-gnu.so'
    dropbox: load fq extension '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-85.4.155/linuxffi.pthread._linuxffi_pthread.cpython-37m-x86_64-linux-gnu.so'
    dropbox: load fq extension '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-85.4.155/cpuid.compiled._cpuid.cpython-37m-x86_64-linux-gnu.so'
    dropbox: load fq extension '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-85.4.155/apex._apex.cpython-37m-x86_64-linux-gnu.so'
    Traceback (most recent call last):
    File "dropbox/client/main.pyc", line 8, in <module>
    File "dropbox/client/features/catalina_migration/catalina_migration_controller.pyc", line 19, in <module>
    File "dropbox/client/features/catalina_migration/catalina_account_context.pyc", line 13, in <module>
    File "dropbox/client/features/catalina_migration/alert_dialog.pyc", line 10, in <module>
    File "dropbox/client/features/file_locking/base_file_locking_alert.pyc", line 13, in <module>
    File "dropbox/client/features/legacy_ui_launcher.pyc", line 21, in <module>
    File "dropbox/client/configuration/manager.pyc", line 45, in <module>
    File "dropbox/client/configuration/utils.pyc", line 18, in <module>
    File "dropbox/client/autorestart.pyc", line 33, in <module>
    File "dropbox/foundation/html_views/common/__init__.pyc", line 9, in <module>
    File "dropbox/foundation/html_views/common/content_metadata.pyc", line 18, in <module>
    File "dropbox/foundation/html_views/common/interface.pyc", line 18, in <module>
    File "dropbox/foundation/context/__init__.pyc", line 14, in <module>
    File "apex/__init__.pyc", line 6, in <module>
    File "<_bootstrap_overrides>", line 153, in load_module
    ImportError: libdropbox_apex.so: cannot open shared object file: No such file or directory
    !! dropbox: fatal python exception:
    ['Traceback (most recent call last):\n', ' File "dropbox/client/main.pyc", line 8, in <module>\n', ' File "dropbox/client/features/catalina_migration/catalina_migration_controller.pyc", line 19, in <module>\n', ' File "dropbox/client/features/catalina_migration/catalina_account_context.pyc", line 13, in <module>\n', ' File "dropbox/client/features/catalina_migration/alert_dialog.pyc", line 10, in <module>\n', ' File "dropbox/client/features/file_locking/base_file_locking_alert.pyc", line 13, in <module>\n', ' File "dropbox/client/features/legacy_ui_launcher.pyc", line 21, in <module>\n', ' File "dropbox/client/configuration/manager.pyc", line 45, in <module>\n', ' File "dropbox/client/configuration/utils.pyc", line 18, in <module>\n', ' File "dropbox/client/autorestart.pyc", line 33, in <module>\n', ' File "dropbox/foundation/html_views/common/__init__.pyc", line 9, in <module>\n', ' File "dropbox/foundation/html_views/common/content_metadata.pyc", line 18, in <module>\n', ' File "dropbox/foundation/html_views/common/interface.pyc", line 18, in <module>\n', ' File "dropbox/foundation/context/__init__.pyc", line 14, in <module>\n', ' File "apex/__init__.pyc", line 6, in <module>\n', ' File "<_bootstrap_overrides>", line 153, in load_module\n', 'ImportError: libdropbox_apex.so: cannot open shared object file: No such file or directory\n'] (error 3)

    Did I miss some dependency or other pre-installation setup requirement?
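     A quick way to sanity-check a traceback like this (hypothetical commands; adjust the appdata path to your setup) is to look for the library the ImportError names - if it's absent, the .dropbox-dist download was incomplete:

    ```shell
    # Search the dist tree for the shared object named in the ImportError.
    find /mnt/user/appdata/dropbox/.dropbox-dist -name 'libdropbox_apex.so'
    # No output, combined with the ImportError above, points to an
    # incomplete download; wiping .dropbox-dist forces a clean re-fetch.
    ```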

  11. 2 hours ago, ken-ji said:

    Apologies...

     

    I'm going to rebuild the docker image and see if it still works.

    No need for apologies; volunteer support when you're moving away from a platform is a tough spot. Thanks for checking, though! If it doesn't update, I'll try a different method. Thanks, again!

  12. (warning: Docker newbie)  I just installed it and am getting the 

     

    Quote

    [ALERT]: You're using an old version of Dropbox. Please update to the latest version to continue using Dropbox.

    error. I saw a post from back in 2017 that this was an issue, but I'm not clear on how it was resolved. The Docker interface in unRAID claims that the application is up to date (and I have already run a check for updates).

    Apologies if I'm leaving out any critical information, this is my first experience with docker containers. 

  13. Hi, just upgraded to v6; went for a clean install so I could start my "new" build with just the parity-protected array intact. [edit: upgrade from 5.0.6]

     

    I see that the default setup is to set every top level directory as a Share. Convenient, except that I only need/want three of the forty shared. I also see that the default is to not share any of the Disk mounts as accessible, which is what I would prefer.  

     

    I have two(-ish) questions:

     

    1. How do I share the disks?  (i.e. if I go into a Windows browser and type \\tower\disk1, the location doesn't exist, nor is it in the \\tower directory list)

        [edit]  found this one in an old thread - enabling sharing for Disks is under Settings|Global Share Settings instead of in the Share tab

    2. Can I mass-delete all of the existing Shares (not the folders/files, just the shares), or do I have to manually go into all 37 shares I don't want and set them to Export = no?

       2a - if I set the shares to export=no and access the files from the /diskn share, does unRAID ignore the split, or must I also set allocation to fill-up and split level to manual?

     

    Thanks!
