Tuftuf


Posts posted by Tuftuf

  1. I have an older Unraid version, 6.9.1, but it's been stable for the purpose it serves.

     

    Installing any game server docker fails at updating SteamCMD. I've used these before and it's always been quite straightforward. I had this docker running on another server I shut down, and I expected to start it up easily on this machine.

     

    My focus was on installing ich777/steamcmd:valheim, but I tried at least five others.

     

    The console log only shows:

    ---Ensuring UID: 99 matches user---
    ---Ensuring GID: 100 matches user---
    ---Setting umask to 000---
    ---Checking for optional scripts---
    ---No optional script found, continuing---
    ---Taking ownership of data...---
    ---Starting...---
    SteamCMD not found!
    steamcmd.sh
    linux32/steamcmd
    linux32/steamerrorreporter
    linux32/libstdc++.so.6
    linux32/crashhandler.so
    ---Update SteamCMD---
    

     

    If I use the console on the Valheim docker and run SteamCMD manually, I get the following error.

     

    root@b6c7065c4195:/serverdata/steamcmd# ./steamcmd.sh 
    Redirecting stderr to '/root/Steam/logs/stderr.txt'
    threadtools.cpp (3409) : Assertion Failed: Failed to create thread (error 0x1)

     

    If I install steamcmd/steamcmd from the command line using the CS:GO example, only changing the local paths, this error keeps popping up in the log:

     

    src/clientdll/cminterface.cpp (2861) : Assertion Failed: m_VecNetAdrNetFilterCMs.Count() > 0
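    For reference, this is roughly what I ran - a sketch rather than the exact command, with a placeholder path for my install directory, and app ID 740 (the CS:GO dedicated server) taken from that example:

    docker run -it --rm \
      -v /mnt/user/appdata/steamcmd/csgo:/data \
      steamcmd/steamcmd:latest \
      +force_install_dir /data \
      +login anonymous \
      +app_update 740 validate \
      +quit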
    

     

    I also tried cm2network/steamcmd, which seemed to install and download all the initial Steam files.

     

    Am I missing something really simple here?

  2. @DZMM I moved over from plexguide to your script over a year ago. Using the old version of the script without cache settings works as expected. If I use the new version with the cache defined, I get an extra folder created within my mount point with the same name as my mount point.

     

    Am I missing something, or should the configuration below be valid? The paths have all changed as I moved it to a new system.

    I'm not certain whether I want the cache setting or not, but I dislike the new script not working correctly for me. I've read before that it was no longer being maintained within the rclone code.

     

    I've also always been mounting mine as gdrive & tdrive. Looking at it again recently, I see I don't ever use the gdrive sections and they don't seem to be required.

     

    0.96.4

    # REQUIRED SETTINGS
    RcloneRemoteName="tcrypt" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
    LocalFilesShare="/mnt/storage/firefly/localfs" # location of the local files you want to upload without trailing slash to rclone e.g. /mnt/user/local
    RcloneMountShare="/mnt/storage/firefly/rclonefs" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
    MergerfsMountShare="/mnt/storage/cloudfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
    DockerStart="" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART in docker settings page
    MountFolders=\{"movies,tv"\} # comma separated list of folders to create within the mount

     

    0.96.9.2

    # REQUIRED SETTINGS
    RcloneRemoteName="tcrypt" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
    RcloneMountShare="/mnt/storage/firefly/rclonefs" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
    RcloneMountDirCacheTime="720h" # rclone dir cache time
    LocalFilesShare="/mnt/storage/firefly/localfs" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
    RcloneCacheShare="/mnt/storage/firefly/rclone_cache" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
    RcloneCacheMaxSize="250G" # Maximum size of rclone cache
    RcloneCacheMaxAge="336h" # Maximum age of cache files
    MergerfsMountShare="/mnt/storage/cloudfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
    DockerStart="" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
    MountFolders=\{"movies,tv"\} # comma separated list of folders to create within the mount

     

     

    I have gdrive & gcrypt; I carried the config over but recently noticed I don't use them or even mount them. OK to remove? (I've sketched how I'd remove them after the config below.)

    Do you use gdrive, or just team drives (now shared drives)?

     

    I'm missing scope = drive, but it's the default option (just checked).

     

     

    [gdrive]
    client_id = clientid@google
    client_secret = AAAAAAAAAAAAAAAAA
    type = drive
    token = {"access_token":""}
    
    [gcrypt]
    type = crypt
    remote = gdrive:/encrypt
    filename_encryption = standard
    directory_name_encryption = true
    password = PASS1
    password2 = PASS2
    
    [tdrive]
    client_id = clientid@google
    client_secret = AAAAAAAAAAAAAAAAAAAA
    type = drive
    token = {""}
    team_drive = AAAAAAAAAAAAAAAAAAA
    
    [tcrypt]
    type = crypt
    remote = tdrive:/encrypt
    filename_encryption = standard
    directory_name_encryption = true
    password = PASS3
    password2 = PASS4
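    If they are safe to drop, my assumption is that removing the unused remotes is just a case of deleting them from the config, e.g.:

    rclone config delete gcrypt
    rclone config delete gdrive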

     

  3. I'm setting up another system and changing how my paths are arranged.

     

    The main question here is: are people using the cache setting? I've read on other forums and places that the cache setting shouldn't be needed, and hasn't been for a long time, since ranged GETs were added.

    Do I need this cache mount?

    Can I just remove the three lines defining it? (I've sketched what I mean below the settings.)

     

    /mnt/storage is an SSD cache pool.

    EDIT - I have changed /mnt/remotes/rclonefs to be on the SSD.

     

     

    I was going to place the rclone mount in /mnt/remotes, as I expected this to be a read-only, remotely mounted filesystem.

     

     

    RcloneMountShare="/mnt/storage/firefly/rclonefs" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
    RcloneMountDirCacheTime="720h" # rclone dir cache time
    LocalFilesShare="/mnt/storage/firefly/localfs" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
    RcloneCacheShare="/mnt/storage/firefly/rclone_cache" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
    RcloneCacheMaxSize="250G" # Maximum size of rclone cache
    RcloneCacheMaxAge="336h" # Maximum age of cache files
    MergerfsMountShare="/mnt/storage/cloudfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
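    To be explicit, these are the three lines I mean - commented out here as a sketch, since I don't yet know whether the script tolerates them being absent:

    # RcloneCacheShare="/mnt/storage/firefly/rclone_cache"
    # RcloneCacheMaxSize="250G"
    # RcloneCacheMaxAge="336h"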

  4. The issue is that I can no longer boot from USB; I think if I take the NVMe out, everything will be back to normal.

     

    I installed Windows as a VM with the NVMe passed through; this was working great until I rebooted the machine. It rebooted into Windows, so after that I rebooted it myself, selecting the SanDisk USB, and Unraid started to boot. I wish I had taken a screenshot, but it locked up. It may have been showing VFIO errors.

     

    Rebooted again: I can't boot from the USB stick, and I can't see the boot manager anymore.

    I created a new USB stick and can't boot from that either.

     

    On the flip side, Windows runs really fast, and it's the first time I've really run it on bare metal with anything installed. I've ordered an LSI card and plan to move the disks into my other server, and then I'll see what actually happened here.

  5. I use VLANs at home, and this caused all the traffic to leave via the management address even after binding it to an interface within the rclone upload script. The fix was to add a second routing table and routes for the IP I assigned it.

     

    The subnet is 192.168.100.0/24

    The gateway is 192.168.100.1

    The IP assigned to rclone upload is 192.168.100.90

     

    echo "1 rt2" >> /etc/iproute2/rt_tables
    ip route add 192.168.100.0/24 dev br0.100 src 192.168.100.90 table rt2
    ip route add default via 192.168.100.1 dev br0.100 table rt2
    ip rule add from 192.168.100.90/32 table rt2
    ip rule add to 192.168.100.90/32 table rt2
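    To confirm the policy routing is in place, the rules and the new table can be checked with standard iproute2 commands:

    ip rule show
    ip route show table rt2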

     

  6. On 3/16/2020 at 9:30 AM, DZMM said:

    The unmount script doesn't have any fusermount commands, as the new script structure makes this difficult (mount locations are variable).  The script is intended to be a cleanup script to be run at array start.

     

    Do you need it to run at array stop?  If so, just add your own fusermount commands to the script.

     

    My array was not stopping, and I blamed this when I couldn't quite work out where the fusermount command was. I'll have to see if there is something else causing it not to stop, as it looks to be unrelated. I don't plan on stopping it just yet; it's serving its purpose. The main focus is getting things ready to back it all up. If I do end up needing it at array stop, I'll add my own fusermount commands along the lines of the sketch below.
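    A sketch only - lazy unmounts of the rclone and mergerfs mounts, assuming they end up at <share>/<remote name> as in my settings (adjust to wherever they actually live):

    fusermount -uz /mnt/storage/firefly/rclonefs/tcrypt
    fusermount -uz /mnt/storage/cloudfs/tcrypt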

  7. 2 hours ago, DZMM said:

    I think you're overthinking things - rclone does not add any extra considerations to your local setup, other than bandwidth and enough storage for local files that are pending upload.  For my setup I have made the following choices:

     

    It's time to think: I previously moved my whole Plex and related setup to a hosted dedicated server (1Gb/1Gb), as with gdrive my upload (400/35) wasn't good enough to keep up. Cost-wise it would now work out around the same for me to upgrade to a business connection, which gives me the option of 400/200 or 750/375.

     

    I recently built a 2-in-1 gaming PC on a 7700, and since I've had an Intel CPU begging me to use Quick Sync, I've been looking at options to bring my Plex setup back home. Right now it only has one SSD and one NVMe, but that will change soon.

    2 hours ago, DZMM said:

    1. plex appdata on an unassigned Nvme - probably overkill, but I want my library browsing to be as fast as possible and the drive was on sale.

    2. A mergerfs union of 2 old unassigned 1TB SSDs in a pool and /mnt/user/local - if the SSD pool is full then new nzbget/qbittorrent files get added to the array instead i.e. like a 2nd cache pool.  I do this to avoid my new nvme cache drive, to try and avoid 'noisy' writes to a HDD and because I need an SSD to keep up with my download speed.

     

    Did you place your pool as LocalFilesShare="/mnt/disks/NVMEpool"

    and the array as LocalFilesShare2="/mnt/user/local"?

     

    I'm checking whether it's just a case of listing the shares in the order you want them to be used (as in the sketch below), or whether there was more to it?
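    My assumption of how that ordering works, sketched with placeholder paths - the first branch is the fast pool, the second the array share that takes over when the pool is full (per your description above):

    LocalFilesShare="/mnt/disks/NVMEpool"   # SSD/NVMe pool, filled first
    LocalFilesShare2="/mnt/user/local"      # array share, used when the pool is full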

  8. @DZMM Do you mount the mergerfs within /mnt/user? I had read some recommendations to place it in /mnt/disks and then use the RW,Slave option for dockers (something like the mapping sketched below), but I'm not certain whether that was old information or not.
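    For clarity, this is the kind of container mapping I mean - a sketch with a hypothetical /cloud container path and the official Plex image as an example; on the docker command line the Unraid "RW/Slave" access mode would look like:

    docker run -d --name plex \
      -v /mnt/disks/cloudfs:/cloud:rw,slave \
      plexinc/pms-docker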

     

    Previously I had used service accounts, but I've not set that up here. Is the 750GB limit an upload limit, or does it include streaming (or is that just the API limit)? I don't expect to be uploading more than 750GB per day.

     

    I have some concerns that my array may not keep up with downloads, extraction, etc. I thought about putting the 'local' mount point on an NVMe/SSD. Have you or anyone else done such a configuration?

     

    I have almost everything working (Plex is misbehaving). I added the mergerfs mount to /user within docker, and Plex scanned all the movies and TV overnight. However, I now can't access the Plex UI locally, although accessing movies and files is fine, and accessing from Plex.tv is fine.

    Accessing the Plex UI directly gives connection closed or timed out.

     

     

     

    @neow I only just started using this whole process on Unraid; I started on the original scripts and then moved on to the new ones. I only had to change the settings near the top of the new scripts to match my requirements, and I used the same paths and names across the different scripts. Then it worked. I believe it's referring to the name of your share within rclone.conf.

     

     

     

  9. 26 minutes ago, senpaibox said:

    Did you forget the colons?

    rclone lsd tdrive:

     

    Thank you :) That's a really good start.

     

    root@Firefly:~# rclone lsd tcrypt:
              -1 2019-04-09 20:41:53        -1 movies

     

    4 minutes ago, DZMM said:

    you can just copy the rclone.conf for that system to /boot/config/plugins/rclone - assuming you are using the unraid rclone plugin.

    I was copying the encrypted part from it, but it has many service accounts defined as well, so starting fresh seemed good - I'm finally trying to understand this.

     

    Yes, I'm using Unraid and the rclone plugin. I guess I can look into the next bit now.

  10. Thanks, I've seen that now, but I'm still getting stuck at almost the first step.

     

    Tried a working client id/key to test and created a new one.

    Completed the remote auth and provided response.

    I've selected the correct team drive once it was listed.

     

    But verifying the mount fails.

     

    root@Firefly:~# rclone lsd tdrive
    2020/03/06 22:14:06 ERROR : : error listing: directory not found
    2020/03/06 22:14:06 Failed to lsd with 2 errors: last error was: directory not found

  11. I'm already using rclone encrypted with tdrive on another OS/app (plexguide), but I'm just not quite following how to mount my library on Unraid.

     

    Watching the video, there are some differences. It could just be that my head is hurting.

     

    What goes as the root folder? I can make the service keys, and I have another system mounting this tdrive whose rclone.conf I can look at. I just need to get the final pieces together to get it mounted on Unraid.

     

    
    root@Firefly:~# rclone config
    No remotes found - make a new one
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> 13
    Type of storage to configure.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    Storage> 13
    client_id> 1.....................................apps.googleusercontent.com
    Google Application Client Secret
    Setting your own is recommended.
    Enter a string value. Press Enter for the default ("").
    client_secret> .......................
    Scope that rclone should use when requesting access from drive.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
     1 / Full access all files, excluding Application Data Folder.
       \ "drive"
     scope> 1
    ID of the root folder  
    Enter a string value. Press Enter for the default ("").
    root_folder_id>   
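    My understanding of the root folder question (an assumption based on the docs and my other system's config, so please correct me): for a team/shared drive you leave root_folder_id blank and set team_drive instead, so the finished remote should end up looking roughly like this, with placeholder values:

    [tdrive]
    type = drive
    client_id = <client id>
    client_secret = <client secret>
    scope = drive
    team_drive = <shared drive ID>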

     

    Some progress :)

    Current remotes:

    Name                 Type
    ====                 ====
    tcrypt               crypt
    tdrive               drive

     

  12. After spending a year or so enjoying passthrough VMs on Ryzen, I had a good idea what I could get out of it.

    Both users access all the games via Steam clients or Steam Link type devices. I'm only looking for around 60 FPS and really not looking at 4K gaming or anything.

     

    Ended up building a system with the following

    X-Case XK445S 4U

    7700k

    32GB Ram

    1070TI

    1080TI

    Array Disk - 128GB SSD (This is just a temp solution and will put some disks in it at some point)

    Unassigned Disk - Sabrent 1TB Rocket NVMe PCIe M.2 2280

     

    I know it's not advised to share cores between VMs, however:

     

    VM1 - 1070 Ti, cores 2+6 & 3

    VM2 - 1080 Ti, cores 1+5 & 7
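    As a rough sketch of how that pinning looks in the VM1 XML (the 3-vCPU count and the assumption that 2/6 are a hyperthreaded pair are mine):

    <vcpu placement='static'>3</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='2'/>
      <vcpupin vcpu='1' cpuset='6'/>
      <vcpupin vcpu='2' cpuset='3'/>
    </cputune>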

     

    Currently getting 60 FPS in Quake Champions, Overwatch, Minecraft & C&C RA3. The GPUs will be CPU-limited, but they do work.

     

    Same results on both machines; I need more testing to see if there are any real slowdowns.

  13. I went with a cheaper option because I'll need a spare computer at a later date to run CCTV somewhere; for the moment I'll try running two VMs. This lets me sit and plan a new build properly.

     

    Managed to get a 7700K, RAM, mobo and cooler for a reasonable price. I already had a spare board and intend to take 32GB out of my Ryzen system.

     

    It'll be a 7700K with either an Asus Prime-A 270 or an Asus Strix 270F. I will have both boards.

     

    I'll get to test how well the VMs run; I'm hoping to get away with giving each VM one real core and splitting a third between them. That may well fail, since he has 6 vcores on the Ryzen system.

     

    One good thing about this is freeing up my Ryzen system for other tasks.

  14. Hello, 

     

    I currently have a Ryzen 1700. It's used for my dockers and a gaming VM; all access is via Steam Link, but performance has been OK. I can't say how close to native it is anymore, as I'm not passing through a monitor directly.

     

    I have a spare Intel 270-chipset board, so I could get a 7700K and get a PC up and running. However, 4c/8t would be a bit tight for running two Windows VMs.

     

    I'm also looking at an Intel 300-chipset board with a 9700K.

     

    I'm also looking at X299 boards and a 7920X.

     

    I will be splitting the load between my current system and the new one; however, I would like two gaming PCs in the new one and, if possible, spare resources for work-based VMs (no GPU).

     

    Does anyone want to make any suggestions or comments?

  15. I have yet to test anything I've seen in there to see if it makes any difference, but I needed to save the link somewhere!

     

    http://mathiashueber.com/amd-ryzen-based-passthrough-setup-between-xubuntu-16-04-and-windows-10/

     

     

     

    <iothreads>6</iothreads>
    <cputune>
      <vcpupin vcpu='0' cpuset='0'/>
      <vcpupin vcpu='1' cpuset='1'/>
      <vcpupin vcpu='2' cpuset='2'/>
      <vcpupin vcpu='3' cpuset='3'/>
      <vcpupin vcpu='4' cpuset='4'/>
      <vcpupin vcpu='5' cpuset='5'/>
      <vcpupin vcpu='6' cpuset='6'/>
      <vcpupin vcpu='7' cpuset='7'/>
      <vcpupin vcpu='8' cpuset='8'/>
      <vcpupin vcpu='9' cpuset='9'/>
      <vcpupin vcpu='10' cpuset='10'/>
      <vcpupin vcpu='11' cpuset='11'/>
      <iothreadpin iothread='1' cpuset='0-1'/>
      <iothreadpin iothread='2' cpuset='2-3'/>
      <iothreadpin iothread='3' cpuset='4-5'/>
      <iothreadpin iothread='4' cpuset='6-7'/>
      <iothreadpin iothread='5' cpuset='8-9'/>
      <iothreadpin iothread='6' cpuset='10-11'/>
    </cputune>

     

  16. To restart nginx I've been having to use both

     

    /etc/rc.d/rc.nginx restart

    and /etc/rc.d/rc.nginx stop

     

    Quite often checking its status will show it's still running, so make sure it's really closed - it doesn't want to close gracefully :)

    /etc/rc.d/rc.nginx status 
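    The full sequence I end up running looks something like this (just the commands above with the order and checks made explicit; bringing nginx back up afterwards with rc.nginx start is an assumption on my part):

    /etc/rc.d/rc.nginx restart
    /etc/rc.d/rc.nginx status    # quite often still shows nginx running
    /etc/rc.d/rc.nginx stop      # force it to actually close
    /etc/rc.d/rc.nginx status    # confirm it's really gone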

     

    I've been running 6.5.1-rc3 for around 12 hours now, but my system is still under 50%. I think I need to be starting new dockers, checking the app store, using Docker Hub, etc. before I see the memory grow. I'll see how the next day or two goes.

  17. 2 minutes ago, John_M said:

     

    Do you see any nginx-related errors in your syslog, like I'm seeing?

     

     

    I just searched the syslogs in my older diags but I don't see any. They are linked in the other thread in the first post.

     

    I did see some OOM / memory errors when this occurred a few days ago, but I believe they were Docker attempting to write something. I can't seem to find them now.

  18. Just now, John_M said:

     

    I don't want to hijack this thread in case the two aren't related but I'll update to the rc and run a parity check in safe mode and report any findings in the other thread.

     

    I was running a parity check for 20 hours of the 32 hours of uptime.