
DZMM

Members
  • Posts: 2,801
  • Joined
  • Last visited
  • Days Won: 9

Everything posted by DZMM

  1. I just tried again and managed to log in, which is one step further than I get on my VM, but then I just got a blank black screen in the VNC Viewer.
  2. Oh sorry, you want to be able to view the S3 files. Found this on Google - https://aws.amazon.com/es/customerapps/s3fm-a-free-online-file-manager-for-amazon-s3/?nc1=f_ls
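As an alternative to a web-based file manager, the AWS CLI can list a bucket's contents directly from the command line. A minimal sketch, assuming the AWS CLI is installed and credentials are already configured; the bucket name below is a placeholder:

```shell
# List everything in the bucket, with sizes in readable units and a
# total at the end (bucket name is a placeholder)
aws s3 ls s3://my-picture-archive/ --recursive --human-readable --summarize
```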
  3. rclone mount <remote_name>:<s3_bucket_name> <mount_path> - for <mount_path>, use something like /mnt/user/picture_archive. There are lots of other options for mount, but that'll do the job: https://rclone.org/commands/rclone_mount/
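Filled in with example values, the mount command above might look like this (a sketch - the remote name, bucket, and path are placeholders; the flags are standard `rclone mount` options):

```shell
# Mount the S3 bucket read-only at the given path, detaching into the
# background so the shell is freed up (names are placeholders)
rclone mount s3remote:picture-bucket /mnt/user/picture_archive \
  --read-only \
  --daemon
```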
  4. I loved those books as a kid - the Belgariad was one of the first fantasy series I read.
  5. I just tried installing but got an error:
     root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='Google-MusicManager' --net='host' --cpuset-cpus='5,6,7,19,20,21' -e TZ="Europe/London" -e HOST_OS="Unraid" -e 'PORT'='5920' -e 'PASSWORD'='Removed' -v '/mnt/user/media/other_media/music/':'/music':'ro' -v '/mnt/cache/appdata/dockers/Google-MusicManager':'/config':'rw' false 'rix1337/docker-google-musicmanager' false
     Unable to find image 'false:latest' locally
     /usr/bin/docker: Error response from daemon: pull access denied for false, repository does not exist or may require 'docker login'. See '/usr/bin/docker run --help'.
     The command failed.
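Reading the error, Docker tried to pull an image literally named `false`: the template has injected a stray `false` token before the real image name (and another after it), and Docker treats the first positional argument after the options as the image. A corrected invocation would end with the repository name, roughly as below (a sketch only - the environment variables and volume mappings are copied from the failing command above):

```shell
# The image name must be the last positional argument; the stray
# 'false' tokens are what Docker was trying to pull as an image
docker run -d --name='Google-MusicManager' --net='host' \
  -e TZ="Europe/London" \
  -e 'PORT'='5920' \
  -v '/mnt/user/media/other_media/music/':'/music':'ro' \
  -v '/mnt/cache/appdata/dockers/Google-MusicManager':'/config':'rw' \
  'rix1337/docker-google-musicmanager'
```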
  6. +1 from me as well. I've set up a second 'pool' as well following these instructions, which has really helped. But if the mover could also operate on this second pool, it'd be a perfect solution.
  7. I haven't changed the crontab (not sure where this setting is) - I've got docker log rotation on. Does that setting control all logs, or just the main container log?
  8. I just spotted that my /log/nginx/access.log file has hit 4GB, with entries going back nearly a year. Is there a way to control the size of the log, other than deleting it (which I have)? Thanks
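One common way to cap a log like this is a logrotate rule for the file. A sketch of such a rule, assuming the access.log path as seen from the host (the appdata path below is a placeholder - adjust it to wherever the container maps its log directory):

```
/mnt/cache/appdata/nginx/log/nginx/*.log {
    size 10M          # rotate once a log exceeds 10MB
    rotate 4          # keep at most 4 rotated copies
    compress          # gzip the rotated copies
    missingok         # don't error if a log is absent
    notifempty        # skip rotation for empty logs
    copytruncate      # truncate in place so nginx keeps its file handle
}
```

`copytruncate` matters here because nginx inside the container holds the file open; renaming the log out from under it without signalling the process would leave it writing to the rotated file.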
  9. Yep - I worked that out yesterday when I did a few Google searches. It's working again today, so the key's been updated.
  10. I've never used MakeMKV before - I didn't realise you had to buy a key. I was basing my info on these instructions: https://github.com/jlesage/docker-makemkv#expired-beta-key
  11. ok, just restarted docker and it's saying expired again - help please
  12. Help please - my key has expired for the first time and doesn't want to renew:
      root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='MakeMKV' --net='br0' --ip='172.30.12.99' --cpuset-cpus='5,6,7,19,20,21' -e TZ="Europe/London" -e HOST_OS="Unraid" -e 'MAKEMKV_KEY'='BETA' -e 'AUTO_DISC_RIPPER'='0' -e 'AUTO_DISC_RIPPER_EJECT'='0' -e 'AUTO_DISC_RIPPER_PARALLEL_RIP'='0' -e 'AUTO_DISC_RIPPER_BD_MODE'='mkv' -e 'USER_ID'='99' -e 'GROUP_ID'='100' -e 'UMASK'='000' -e 'APP_NICENESS'='' -e 'DISPLAY_WIDTH'='1280' -e 'DISPLAY_HEIGHT'='768' -e 'SECURE_CONNECTION'='0' -e 'X11VNC_EXTRA_OPTS'='' -e 'AUTO_DISC_RIPPER_INTERVAL'='5' -e 'AUTO_DISC_RIPPER_MIN_TITLE_LENGTH'='' -e 'TCP_PORT_5800'='7806' -e 'TCP_PORT_5900'='7906' -e 'TCP_PORT_51000'='51000' -v '/mnt/user/':'/storage':'ro' -v '/mnt/disks/ud_pool/appdata/other/MakeMKV':'/output':'rw,slave' -v '/mnt/disks/':'/disks':'rw,slave' -v '/mnt/user/':'/user':'rw' -v '/mnt/cache/appdata/dockers/MakeMKV':'/config':'rw' 'jlesage/makemkv'
      Update: It must just take a while to do - just opened the container again without restarting and it was fine.
  13. Yeah, parity is invalid as the cache problems @johnnie.black has been helping with have slowed down my whole array.
  14. When I try to view my syslog I get this error:
      Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 16777224 bytes) in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(394) : eval()'d code on line 73
      highlander-diagnostics-20180929-0801.zip
  15. Thanks - just added this one
  16. I switched back to latest a couple of days ago and it seems to be working fine
  17. 2nd Anniversary Update

      2 years in and I'm still tinkering with my system, and this month has seen some interesting updates! I think I really am done now as, apart from replacing drives, I've got very few outstanding projects.

      Re-organised cores: I normally have 3 VMs running (2x W10 + pfSense), with the 3rd W10 VM only on occasionally. So, I've split my cores this way:
      • Core 0: unRAID exclusive
      • Core 1: unRAID, emulator pins, Plex
      • Cores 2-4: unRAID, VM1, Plex
      • Cores 5-7: unRAID, VM3, all dockers
      • Cores 8-10: unRAID, VM2, Plex
      • Cores 11-13: unRAID, pfSense VM, Plex
      Isolating my dockers (except Plex, which needs access to more of my machine's power) to cores 5-7 has worked brilliantly for VM performance. VM3 is only turned on occasionally and is light-use, so loading all docker activity onto its cores has worked out fine and the VM is still usable - just a little bit laggy at times.

      Added EVGA GeForce GTX 1060 SC GAMING 6GB (originally a 1050 Ti 4GB): In the past I've resisted gaming on my PC, but I got onto the GeForce Now beta, which only allowed one login at a time. My kids were starting to fight, so I bought the card so they both could play Rocket League. It's proven to be a great investment and I wish I'd done it sooner. Edit: sent the 1050 Ti back as it was noisier than my whole machine thanks to the fan minimum being set at 45%... Upgraded to a 1060 for better future-proofing, yet still lower power usage.

      Increased cache pool to 1TB: With my previous 500GB cache and my workload, the mover was running almost constantly at times, sometimes struggling to offload faster than files were being added, and I think sometimes files were being moved before they were in their optimal directories, which was messing up my array split levels. I bought a new 1TB MX500 to go alongside my existing 2x500GB. I'm happy with this move as it makes my future upgrade path to 1TB+ SSDs easier.

      Removed 2TB Hitachi UD: I had to free up a slot for the 1TB SSD. With the additional cache capacity I have enough space for torrents, so I haven't missed this drive.

      Changed 500GB (2x250GB) unassigned to 250GB RAID1: I feel happier this way as I've now got redundancy if one drive goes.

      Added CrashPlan docker: Part of my new safety-first strategy, as I've ditched my rclone backup and I'm now using CrashPlan. Luckily I'd tried Home in the past, so I qualified for the $2.50/mth for 12 months offer.

      Moved more HD movies to gdrive via rclone: In addition to all my TV shows, I've now moved over 50% of my HD movies to Google Drive. I'll keep shifting more to make room for the 4K library I'm slowly growing locally. Eventually I'll back up my rclone mount to CrashPlan in case Google change their policy in the future.

      My average CPU load is creeping up at around 40%, which is acceptable given how much my usage has increased since I started 2 years ago, when it was around 15-20%.

      To do:
      • get Google Music Manager working in a docker so I can upload my music library
      • get another GeForce Now account so we can have 3 concurrent gamers
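The core layout described above maps onto Docker's `--cpuset-cpus` flag. For a running container the pinning can also be changed on the fly with `docker update` - a sketch, with example container names (the core numbers follow the layout in the post; hyperthread siblings would be added to each list on an HT system):

```shell
# Pin Plex to the cores it shares with the VMs (1-4, 8-13),
# and confine another docker to the VM3 cores (5-7)
docker update --cpuset-cpus='1,2,3,4,8,9,10,11,12,13' Plex
docker update --cpuset-cpus='5,6,7' Crashplan
```

In unRAID's webGui the same pinning is normally set per-template, which regenerates the `docker run` command with the matching `--cpuset-cpus` option.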
  18. Which one is yours on dockerhub? Thanks
  19. no problems for me, but maybe because I've done a few reboots over the last couple of weeks
  20. It must be internal buffers, as it doesn't seem to be in the docker image and it can't be on the cache drive as there's no mapping for it. Thanks - I've checked the history and the 'effective' rates are much higher than the rates I'm seeing on my network, so I'll have to trust the dedupe is working... I don't think throwing more cores at it will help my scenario. I've pinned it to three cores shared with other dockers and they are only running at about 50%. I tried temporarily giving it another 3 cores, but it didn't make a difference. I'll just have to be patient - my valuable files should take around 10 days, and then I'll start adding my media a bit at a time, and that will take months to back up.
  21. has it been added yet? I tried adding via your docker template but it didn't work, so I probably did something wrong
  22. I've just started using this docker today. I have a couple of questions please: 1. where does the docker store temporary files while it is uploading, or does it do everything in ram? 2. Any tips on increasing upload speeds? e.g. does assigning more cores help or does the docker not need a lot of resources? I seem to be averaging only around 3-4Mbps, peaking at 12Mbps if I'm lucky.
  23. Thanks for posting this, as it made me research what was going on with my ASUS BIOS. I hadn't installed the latest one released in April because it said 'beta', but after researching I learnt that ASUS (irresponsibly, in my opinion) had labelled a full release as beta because they didn't want to encourage out-of-warranty users to install it, rather than just saying 'install at your own risk'.
  24. +1 TBS 6.6 build working now - thanks for fixing so quickly
  25. @CHBMB I think something's happened between RC4 and the full release, as I just rolled back to 6.5.3 and all is OK, and RC4 was fine as well.