Leaderboard

Popular Content

Showing content with the highest reputation on 06/13/19 in all areas

  1. As of January 25, 2022 this image has been deprecated; read the notice here: https://info.linuxserver.io/issues/2022-01-25-pyload/
     Application Name: Pyload
     Application Site: https://pyload.net/
     Docker Hub: https://hub.docker.com/r/linuxserver/pyload/
     Github: https://github.com/linuxserver/docker-pyload
     Please post any questions/issues you have relating to this docker in this thread. If you are not using Unraid (and you should be!) then please do not post here; instead use the linuxserver.io forum for support.
    1 point
  2. Application Name: Dokuwiki
     Application Site: https://www.dokuwiki.org/dokuwiki
     Docker Hub: https://hub.docker.com/r/linuxserver/dokuwiki/
     Github: https://github.com/linuxserver/docker-dokuwiki
     Please post any questions/issues you have relating to this docker in this thread. If you are not using Unraid (and you should be!) then please do not post here; instead use the linuxserver.io forum for support.
    1 point
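     For anyone looking for a starting point, a minimal docker run sketch following the usual linuxserver.io pattern - the PUID/PGID, timezone, host port and host config path below are illustrative placeholders, so check the Github readme linked above for the current parameters:
        docker run -d \
          --name=dokuwiki \
          -e PUID=1000 \
          -e PGID=1000 \
          -e TZ=Europe/London \
          -p 8080:80 \
          -v /mnt/user/appdata/dokuwiki:/config \
          --restart unless-stopped \
          linuxserver/dokuwiki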
  3. Thanks..... It's nice to hear this sort of comment once in a while.
    1 point
  4. Just use the Limetech USB Creator tool to recreate the USB stick contents. The only file you need to keep from the old install is the licence key file. You will then have a new install as far as any settings are concerned.
    1 point
  5. Couple of things I'd try at this point:
     1. Update the nvidia drivers with the "clean install" option.
     2. Build a new VM with just the software you need to reproduce the blackout, to work out whether it's an OS issue or software.
     3. It's weird how the screens black out so consistently after a given period of time. A second power supply powering the 2070's would rule out a power supply issue.
     This reminds me of a 386 computer I was trying to diagnose back in the 90's. It had suffered a lightning-related surge and would only boot to DOS if the machine was powered up for no less than ten minutes and then warm rebooted. That begs the question - any surges that you are aware of? Shit gets real weird after one. We're talking gateways to alternate realities opening up in your PCIe lanes kinda weird.
    1 point
  6. If you put files in mount_unionfs (which you should!), behind the scenes they are really added to rclone_upload and, once uploaded, removed from rclone_upload - all while always being available and never appearing to 'move' in mount_unionfs. My setup isn't designed for copies or syncs - it's for moving files to gdrive for seamless playback and library management, aka plex. If copies or syncs are what you want, you'll need to start a new thread to get help doing that - or read the rclone help pages, which are good. Using my setup to do this is overkill, as you would only need to mount if you lose files.
    1 point
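     To make the 'behind the scenes' upload concrete, a rough sketch of the kind of rclone move the upload script performs - the remote name, age threshold and transfer count are placeholders, so use the values from your own script:
        rclone move /mnt/user/rclone_upload/google_vfs gdrive_media_vfs: \
          --min-age 15m \
          --transfers 3 \
          --exclude ".unionfs/**" \
          --delete-empty-src-dirs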
  7. Google won't just close an account; if they do, they must give you time to download your stuff. If you want some stuff local, then put it somewhere local and "connect" it to your unionfs. This thread, first page:
        unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO:/mnt/user/mount_rclone/google_tdrive_vfs=RO /mnt/user/mount_unionfs/google_vfs
     Mine looks like this:
        unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RO:/mnt/user/Archiv=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs
     I use the upload script to upload all files in /mnt/user/Archiv if they're older than X days. I don't use separate mounts for different folders; I have everything in /Archiv/. In all my apps (plex, radarr, sonarr, nzbget, smb share) I just use /mnt/user/mount_unionfs/google_vfs.
    1 point
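     A rough sketch of what "upload all files in /mnt/user/Archiv if they're older than X days" might look like in the upload script - the remote name and the 14-day threshold are assumptions, substitute your own:
        rclone move /mnt/user/Archiv gdrive_media_vfs: \
          --min-age 14d \
          --delete-empty-src-dirs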
  8. Again and for the last time, the disk is failing, and in a big way:
     ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
       1 Raw_Read_Error_Rate     POSR-K   191   184   051    -    8098
       5 Reallocated_Sector_Ct   PO--CK   183   183   140    -    508
     196 Reallocated_Event_Count -O--CK   116   116   000    -    84
     197 Current_Pending_Sector  -O--CK   200   200   000    -    310
     198 Offline_Uncorrectable   ----CK   200   200   000    -    330
     200 Multi_Zone_Error_Rate   ---R--   195   193   000    -    2341
    1 point
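     For reference, attribute tables like the one above come from a SMART report; you can pull one from the Unraid console with something like the line below (the device name is only an example - use the one for the failing disk):
        smartctl -A /dev/sdb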
  9. 1. As long as you copy your scripts, rclone config and the check files you should be fine.
     2 & 3. You'd have to create another mount for your gdrive remote if you want to upload files unencrypted. Anything you add to rclone_mount (not advised), unionfs_mount (ok), or rclone_upload (ok) will get encrypted.
     4. You would add to plex the folders you created in your new gdrive remote mount.
     5. Yes - rclone works on W10. Be careful with making RW changes from 2 machines to the same mount files/folders - read the rclone forums if you need to do this.
     6. My first couple of posts explain why the cleanup script is 'needed'.
     Re upload: yes - I would have the upload script running on a 30m/1hr schedule to make your life easier. All your 'media' dockers - plex, radarr, sonarr etc - should be mapped to the unionfs folder.
     1. I think yes - as above, this is the file 'view' that your dockers etc need to use.
     2. Had to google passerelle, but yes - files added to mount_unionfs that don't exist on gdrive (mount_rclone) are automatically added to rclone_upload for upload.
     3. It's a decrypted view of the files ON gdrive. mount_unionfs is a decrypted view of what's ON gdrive (mount_rclone) PLUS what hasn't been uploaded yet (rclone_upload) but still needs to be accessible to plex etc until it 'moves' to gdrive. Never point plex at mount_rclone - always use mount_unionfs.
    1 point
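     To summarise the three paths described above (folder names assume the defaults used in this thread):
        /mnt/user/rclone_upload/google_vfs    local only - staging area for files waiting to be uploaded
        /mnt/user/mount_rclone/google_vfs     decrypted view of what is already ON gdrive - nothing stored locally
        /mnt/user/mount_unionfs/google_vfs    merged view of the two above - point plex, radarr, sonarr etc here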
  10. linuxserver - never had any docker problems that I can remember
    1 point
  11. I don't think people can begin to understand the frustrations of support. Things that just annoy me, not necessarily here, also Github & Discord:
     1. Not performing any sort of rudimentary search to see if the issue has already been noted/solved.
     2. Not including any sort of docker run command.
     3. Not including logs.
     4. Ignoring repeated requests to do any of the above.
     5. Posts that say "This is broken for me" or "Me too" - they contribute nothing to help.
     6. Entitlement - not realising we're volunteers. Any demand for anything: I do this for free, you can ask, but you can't demand.
     7. Any post that assumes the issue isn't PEBCAK. Generally, that is the issue, and even if it isn't, it's a good default position to start from.
     8. Any PM asking me for support that is unsolicited on here or Discord.
     9. Any form of requesting to be spoon fed. You run a server, you're a server sys-admin; the onus is on you to run it, not me.
     10. And the biggest one: not reading the documentation. RTFM! The amount of times we have to point this out in Discord is, quite frankly, ridiculous.
     My position now is I'll contribute the reciprocal amount of effort to answer that people put into their request. If you're too busy to do troubleshooting and post the necessary information, I'm too busy to answer. Golden rule of support: include all the information someone needs to try and recreate your issue; if we can't recreate it, it's probably something we can't fix. I do sometimes regret my more grumpy posts - I forget the people are probably good, pleasant folk, I'm just burnt out by it all. The thing is, it really is the minority that offend. We just don't hear from people that happily use containers without issues.
    1 point
  12. ok thank you, i have some stupid questions:
     1- if i change my server to another place with another ip address, is it a problem, should i configure the link to gdrive again?
     2- if i want to upload (and keep them locally) files from plex on my unraid server to gdrive unencrypted, into which folder should i move them in unraid?
     3- if i want to upload (and erase them locally to gain disk space) files from plex on my unraid server to gdrive unencrypted, into which folder should i move them in unraid?
     4- with plex server, which folders in unraid do i need to point at to use only the unencrypted gdrive files?
     5- can i use a small low-energy computer like a nuc with windows 10 to install plex server and have access to the gdrive data unencrypted? what software should i use to mount the gdrive to windows 10 and decrypt it? because my unraid server actually costs me too much in electricity
     6- and please explain again the purpose of the clean script, i don't understand at all what it does
     the procedure i use to upload to gdrive unencrypted is:
     1- copy/move the file to root -> unraid -> user -> mount_unionfs -> google_vfs
     2- launch the upload script
     is that correct? am i right to think:
     1- root -> unraid -> user -> mount_unionfs -> google_vfs is stored locally?
     2- root -> unraid -> user -> rclone_upload -> google_vfs is just a passerelle (gateway), i don't have to put anything in it. when i launch the script it takes new data from root -> unraid -> user -> mount_unionfs -> google_vfs and deletes it automatically when it's done, am i right?
     3- i don't understand the purpose of the folder root -> unraid -> user -> mount_rclone -> google_vfs. is it stored locally or on gdrive? is it just the decrypted view of my gdrive? does plex need to be configured on this folder?
     the question can be summed up as: which rclone folder is stored on my unraid server, and which rclone folder is the gdrive connection and not stored on my unraid server?
     i had my first upload today, i am very happy thanks to you 😍
    1 point
  13. That's correct - read the earlier posts. If you've not updated an existing file or deleted a file, there's nothing to cleanup!
    1 point
  14. Thank you very much 🙂 Been hoping for this one for a while!
    1 point
  15. It's a custom wood frame build, roughly based on this guide: https://tombuildsstuff.blogspot.com/2014/02/diy-server-rack-plans.html My build is a 16U instead of 20, and I omitted a few things like wheels (furniture movers work just fine on wood floors). I don't have a real use for a rear door either.
    1 point
  16. Key elements of my rclone mount command:
     rclone mount \
       --allow-other \
       --buffer-size 256M \
       --dir-cache-time 720h \
       --drive-chunk-size 512M \
       --log-level INFO \
       --vfs-read-chunk-size 128M \
       --vfs-read-chunk-size-limit off \
       --vfs-cache-mode writes \
       --bind=$RCloneMountIP $RcloneRemoteName: \
       $RcloneMountLocation &
     --buffer-size: determines the amount of memory that will be used to buffer data in advance. I think this is per stream.
     --dir-cache-time: sets how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache, so if you upload via rclone you can set this to a very high number. If you make changes directly on the remote they won't be picked up until the cache expires.
     --drive-chunk-size: for files uploaded via the mount, NOT via the upload script, i.e. if you add files directly to /mnt/user/mount_rclone/yourremote. I rarely do this and it's not a great idea.
     --vfs-read-chunk-size: this is the key variable. It controls how much data is requested in the first chunk of playback - too big and your start times will be too slow, too small and you might get stuttering at the start of playback. 128M seems to work for most, but try 64M and 32M.
     --vfs-read-chunk-size-limit: each successive vfs-read-chunk-size doubles in size until this limit is hit, e.g. for me 128M, 256M, 512M, 1G etc. I've set the limit to off so as not to cap how much is requested, so that rclone downloads the biggest chunks it can for my connection.
     Read more on vfs-read-chunk-size: https://forum.rclone.org/t/new-feature-vfs-read-chunk-size/5683
    1 point
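     A minimal sketch of one way to verify that mount came up before starting any dockers that depend on it - this assumes the same $RcloneMountLocation variable and a small 'mountcheck' file previously uploaded to the remote (both are assumptions, not spelled out in the post above):
        if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
          echo "rclone mount is up"
        else
          echo "rclone mount failed - unmounting stale mountpoint"
          fusermount -uz "$RcloneMountLocation"
        fi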