Everything posted by DZMM

  1. @teh0wner @Spatial Disorder I don't know what to do. The script's working, but mergerfs seems wonky for you. I built mine 3 days ago and I'm on:

     root@Highlander:/mnt/user/public/mergerfs# mergerfs --version
     mergerfs version: 2.29.0-5-gada90ea
     FUSE library version: 2.9.7-mergerfs_2.29.0
     fusermount3 version: 3.9.0
     using FUSE kernel interface version 7.31

     I'd ping @trapexit or raise an issue on GitHub to see if he can shed any more light. Hopefully it can be resolved, whatever the problem is. Until then, I guess I won't reboot! In the meantime you could switch back to unionfs for a bit - everything will still work, you just won't get the file-management benefits. Just use fusermount to unmount whatever mergerfs created, then create a unionfs mount (there's a rough sketch of this after the list).
  2. Are you sure? It looks like it didn't install mergerfs, so there's nothing to mv. There should be pages and pages of text after the docker run command. Also, the test build I did is in /mnt/user/appdata/other/test/mergerfs, so you need to do:

     mv /mnt/user/appdata/other/test/mergerfs /bin
  3. @teh0wner all looks ok. I just did a test install of mergerfs:

     mkdir -p /mnt/user/appdata/other/test/mergerfs
     docker run -v /mnt/user/appdata/other/test/mergerfs:/build --rm trapexit/mergerfs-static-build

     and all was ok:

     mergerfs version: 2.29.0-5-gada90ea
     FUSE library version: 2.9.7-mergerfs_2.29.0
     using FUSE kernel interface version 7.31
     'build/mergerfs' -> '/build/mergerfs'

     I'd ping @trapexit or raise an issue on the mergerfs site, as all is well for me. (There's a sketch of the full install sequence after this list.)
  4. @teh0wner can you post your chosen mount options, as I think you've got something wrong in there? @Spatial Disorder have you tried installing mergerfs again since the change?
  5. If I understand what you're asking correctly, I think you'll have to create a union remote containing your ro remotes, and then add that union to a second union, listed after your rw remotes, with ff or a similar create policy. It's not quite as functional as mergerfs yet, but maybe post your use case on the thread now while they're building it, as I think ncw did do a shout-out for use cases.
  6. @trapexit thanks for building the docker and taking the time to register to let us know about the update. Appreciate it! To be honest I haven't noticed any problems, but will never say no to more security!
  7. Luckily, the next release of rclone looks like it will include rclone union, so we'll have an all-in-one solution: https://forum.rclone.org/t/multiwrite-union-test-beta/14458/43?u=binsonbuzz https://github.com/rclone/rclone/blob/pr-3782-union/docs/content/union.md

     The current mergerfs command is:

     mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true

     Union uses most of the same commands, so the equivalent the script can create (not sure what happens with 'rclone config update' if the remote doesn't exist yet - see the 'rclone config create' sketch after this list) is:

     rclone config update $RcloneUnionRemoteName --union-upstreams $LocalFilesLocation $RcloneMountLocation --union-action-policy all --union-create-policy ff --union-search-policy ff --union-cache-time ????

     I'm not sure what the best union-cache-time will be yet - I've asked on the rclone forums. Once the remote is made, it will just need mounting:

     rclone mount $RcloneUnionRemoteName: $RcloneMountLocation &

     I'm not sure whether things like dir-cache-time, drive-chunk-size, vfs-read-chunk-size etc. will be needed on the union remote mount, or whether setting them on the child rclone mount will be enough - I think it will be, as the union is just accessing files via the upstreams.
  8. I just tried the latest version again and I need to roll back to 0.6.5-ls50 (didn't try other builds, just picked this one first) for the UI to load. I get this error in the logs with the latest version:

     During handling of the above exception, another exception occurred:

     Traceback (most recent call last):
       File "/app/calibre-web/cps.py", line 34, in <module>
         from cps import create_app
       File "/app/calibre-web/cps/__init__.py", line 68, in <module>
         config = config_sql.load_configuration(ub.session)
       File "/app/calibre-web/cps/config_sql.py", line 348, in load_configuration
         update({"restricted_tags": conf.config_mature_content_tags}, synchronize_session=False)
       File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/query.py", line 3910, in update
         update_op.exec_()
       File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/persistence.py", line 1692, in exec_
         self._do_exec()
       File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/persistence.py", line 1873, in _do_exec
         values = self._resolved_values
       File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/persistence.py", line 1839, in _resolved_values
         desc = _entity_descriptor(self.mapper, k)
       File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/base.py", line 402, in _entity_descriptor
         "Entity '%s' has no property '%s'" % (description, key)
     sqlalchemy.exc.InvalidRequestError: Entity '<class 'cps.ub.User'>' has no property 'restricted_tags'

     Traceback (most recent call last):
       File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/base.py", line 399, in _entity_descriptor
         return getattr(entity, key)
     AttributeError: type object 'User' has no attribute 'restricted_tags'

     root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='calibre-web' --net='br0.33' --ip='192.168.30.72' --cpuset-cpus='8,9,13,24,25,29' -e TZ="Europe/London" -e HOST_OS="Unraid" -e 'TCP_PORT_8083'='8083' -e 'DOCKER_MODS'='linuxserver/calibre-web:calibre' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/':'/user':'rw' -v '/mnt/user/media/other_media/books/':'/books':'rw' -v '/mnt/disks/':'/disks':'rw,slave' -v '/mnt/user/media/other_media/books/':'/library':'rw' -v '/mnt/cache/appdata/dockers/calibre-web':'/config':'rw' 'linuxserver/calibre-web'
  9. Does anyone have a custom script they are running from Sonarr? Mine used to work, but now they fail the Sonarr Test. I've read that Sonarr sends a test event and looks for an exit code of 0, but even with the simple script below it says the exit code is 255. Does anyone know what I'm doing wrong? Thanks. (There are a couple of troubleshooting commands sketched after this list.)

     #!/bin/bash
     exit 0
  10. Feature Request: Can we add paths to pre- and post-processing scripts from User Scripts in the future please, e.g. /boot/config/plugins/user.scripts/scripts/CA-Backup_Stop/script, which doesn't work at the moment? Thanks
  11. I'm impressed you uploaded that much content so quickly - or did you just stress test using your own kit? In my experience, if you have enough bandwidth then the only limit on Plex is the same as if the files existed locally, i.e. have you got enough CPU power to handle the streams (and RAM for rclone), because as far as Plex is concerned it's just opening normal media files.
  12. @teh0wner mergerfs doesn't have anything to do with moving files from local to the cloud - the upload script does that.
      - If you don't want files from local to be uploaded, then don't run the upload script.
      - If you want to add 2 local folders to mergerfs for Plex etc. and not have one uploaded, then you'll have to do some mergerfs tinkering. It's not hard, you just have to read up a bit on mergerfs (there's a rough example after this list).
  13. New files added to the mergerfs mount get added to the local folder and then moved to the cloud via the upload script (there's a bare-bones sketch of that step after this list). Changes to files already in the cloud happen in the cloud, without downloading and then reuploading (which is what unionfs had to do). You can safely add files to local if you want, but adding them directly to the rclone mount isn't advised, as writing direct to the mount outside rclone move is not 100% reliable. mergerfs isn't a physical folder, so files can't be 'moved' from it - files are moved to the cloud from the real local location. Correct, although it's the folder you should be using for all activities, i.e. Sonarr, Plex etc., as it lets them see all available files regardless of whether they are in the cloud or local waiting to be uploaded.
  14. You're not using my mount script, which would have created a mountcheck file in /mnt/disks/secure - the upload script stops if it can't find that file.
  15. It's come back - and I chucked in another 32GB last night.

      highlander-diagnostics-20200226-1756.zip
  16. I used the Skylake emulation above - somehow, with the upgrade or my panicked tinkering, the edit was lost from my XML.
  17. I've got 8 x 16GB, and previously 6 sticks. I don't know much about RAM settings as I gave up on overclocking about 15 years ago, so I always just go for the auto settings in the BIOS. I'll monitor for a bit to see if things seem odd.
  18. Well, that was a wild goose chase for 6 hours. Because I made so many hardware changes (new RAM, new GPU, moved cards and disks) as well as removing most kit to clean, I assumed I'd done something wrong. The issue was that pfSense needs an XML edit to work post 6.8.2 - I'd lost my edit and, because I was panicking, I forgot about the change I'd made a few months ago. I think the edit must have dropped off because of the hardware changes. @Chess thanks for trying to help. What's the source for the RAM timings?
  19. I'm in a very weird situation. My existing pfSense VM won't boot, and even when I try creating a fresh instance, that won't boot either. I added two new sticks of RAM today and I'm running a memtest now. But surely, even if one of the sticks is faulty, it wouldn't only impact FreeBSD-based VMs? Anyone have any ideas? I'll post some diagnostics once the memtest finishes.
  20. The script doesn't control how your apps access your mount. Somewhere along the way, one of your apps is accessing your mount oddly, e.g. some users in this thread have had problems with Bazarr and have created a separate mount/client ID combo for it.
  21. No tutorial needed. If you've set up gdrive_media_vfs to be gdrive:crypt, then just create another remote with another name pointing to the same location, i.e. gdrive:crypt, with the same passwords. The only difference is to create a different client_ID, so that only one remote gets the ban, if any (there's a sketch after this list). To be honest I've only had an API ban once, I think, and that was when I didn't know what I was doing yonks ago.
  22. I'm getting errors when I try to install the Python module psutil. On Google the same solution is listed everywhere - installing python-devel (there's a short example after this list). https://github.com/giampaolo/psutil/issues/1143 Is it possible to add this to Nerd Pack please?
  23. You point it to the remote you want to mount, i.e. the decrypted remote. Many people have set up a separate remote for uploading to reduce the risk of their mounted 'streaming' remote getting an API ban because of odd uploading behaviour, strange subtitle behaviour etc.
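Re post 1: a minimal sketch of falling back to unionfs while the mergerfs issue gets sorted. The mount point and share paths are placeholders rather than the exact ones from the script, so adjust them to match your setup.

    # unmount whatever mergerfs created (lazy unmount in case something still has it open)
    fusermount -uz /mnt/user/mount_mergerfs/gdrive_media_vfs

    # recreate the union with unionfs-fuse (the binary is sometimes installed as just 'unionfs'):
    # local branch read-write, rclone mount read-only
    unionfs-fuse -o cow,allow_other \
        /mnt/user/local/gdrive_media_vfs=RW:/mnt/user/mount_rclone/gdrive_media_vfs=RO \
        /mnt/user/mount_mergerfs/gdrive_media_vfs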
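Re posts 2 and 3: the full test-install sequence in one place, based on the output shown in post 3 (the container drops a single 'mergerfs' binary into the mapped build folder). The build path is just the throwaway one used above.

    # build the static binary into a throwaway folder
    mkdir -p /mnt/user/appdata/other/test/mergerfs
    docker run -v /mnt/user/appdata/other/test/mergerfs:/build --rm trapexit/mergerfs-static-build

    # copy the binary the container produced onto the PATH and confirm it runs
    mv /mnt/user/appdata/other/test/mergerfs/mergerfs /bin
    mergerfs --version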
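Re post 7: since it's unclear whether 'rclone config update' will create a remote that doesn't exist yet, here's a sketch using 'rclone config create' instead. The remote name, paths and cache_time value are illustrative; the option names come from the union backend docs linked in that post.

    # create the union remote: upstreams listed local-first so the ff create policy lands new files locally
    rclone config create gdrive_union union \
        upstreams "/mnt/user/local/gdrive_media_vfs gdrive_media_vfs:" \
        action_policy all create_policy ff search_policy ff cache_time 120

    # then mount the union like any other remote
    rclone mount gdrive_union: /mnt/user/mount_union/gdrive_media_vfs --allow-other &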
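Re post 9: an exit code of 255 on Sonarr's connection test is often a permissions or line-ending problem rather than anything in the script body, so a few hedged things to check (the path is only an example):

    # make sure the script is executable by the user Sonarr runs as
    chmod +x /config/scripts/test.sh

    # strip any Windows line endings, which can break the #!/bin/bash line
    sed -i 's/\r$//' /config/scripts/test.sh

    # run it by hand and confirm the shell really sees status 0
    /config/scripts/test.sh; echo "exit code: $?"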
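Re post 12: the 'mergerfs tinkering' could look roughly like this, assuming a second local share that should never be uploaded. The paths are placeholders; the mount options are the same ones as in post 7, and new files still land in the first branch because of the ff create policy.

    # three branches: an uploadable local folder, a keep-local folder, and the rclone mount.
    # the upload script only runs "rclone move" against the first branch, so /mnt/user/local_keep stays on the server
    mergerfs /mnt/user/local_upload:/mnt/user/local_keep:/mnt/user/mount_rclone/gdrive_media_vfs \
        /mnt/user/mount_mergerfs/gdrive_media_vfs \
        -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true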
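Re posts 13 and 14: a bare-bones sketch of the upload step, including the mountcheck guard (using the /mnt/disks/secure path from post 14). The real upload script adds bandwidth limits, exclusions and lock files, so treat this as illustrative only; the remote name, local path and bwlimit are placeholders.

    #!/bin/bash
    # only upload if the mount script's dummy mountcheck file is visible, i.e. the rclone mount is actually up
    if [[ -f /mnt/disks/secure/mountcheck ]]; then
        rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: \
            --delete-empty-src-dirs \
            --bwlimit 8M
    else
        echo "mountcheck not found - mount isn't up, aborting upload"
        exit 1
    fi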
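Re posts 21 and 23: a sketch of adding a second remote with its own client_ID for uploads. The remote names and credentials are placeholders; the crypt passwords must be identical to the existing remote's, so the easiest route is to copy that block in rclone.conf.

    # an extra Google Drive remote using its own client_ID/secret (values are placeholders)
    rclone config create gdrive2 drive client_id YOUR_SECOND_ID client_secret YOUR_SECOND_SECRET scope drive

    # then duplicate the existing crypt block in rclone.conf under a new name (e.g. [gdrive2_media_vfs]),
    # keeping password and password2 exactly the same and changing only "remote =" to gdrive2:crypt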
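Re post 22: the psutil failure is the usual missing-Python-headers problem when pip has to compile the C extension. On a mainstream distro the fix looks like the sketch below; on Unraid the equivalent would be the python-devel-style package being added to Nerd Pack, as requested.

    # psutil builds a C extension, so pip needs Python.h from the dev/devel package plus a compiler
    apt-get install -y python3-dev gcc    # Debian/Ubuntu naming; RPM-based distros call it python3-devel
    pip3 install psutil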