

Posts posted by Geekd4d

  1. 36 minutes ago, itimpi said:

    Not sure to be honest!

     

One thing to remember is that UD devices cannot participate in user shares, whereas pools can. Whether that will affect your workflow I am not sure.

Yeah, that's exactly what I'm trying to get away from: the UD devices, since I have to mount them as /mnt/disks/<device> in all the containers.

  2. 1 minute ago, itimpi said:

    Mover does not support moving files between pools.

Good to know. Am I way overthinking this, and should I just leave well enough alone? Hah! Mainly I'm trying to set things up so I don't have hard-coded paths in all my Docker containers, and since NVMe is supported in a pool, I assume that would make this a lot easier to manage.

I've been running my Unraid server for years, and when I set it up, Unraid did not (officially) support NVMe drives in cache pools (as far as I understood at the time). So I set one up as an unassigned device, mounted it as nvme_docker, and moved ALL my appdata there by changing the default paths in my Docker containers to /mnt/disks/nvme_docker/appdata/<app>. I know (now) that this was a mistake.
     

I'm trying to undo all my newbie mistakes without losing any data. I have a dozen Docker containers I really don't want to have to redo, but I really want to correct this.
     

The way I have things set up is as follows:

• 12x 8TB data drives (working on upgrading these to 6x 16TB to reduce power draw)
• 2x 8TB parity (upgrading to 16TB as well)
• 2x 4TB enterprise SSDs as my RAID 1 cache pool (for downloads only)
• 1x 1TB SSD for appdata backups (USB 3.2, as an unassigned disk; replacing it this weekend with a 4TB SSD for longer-term backups)
• 1x 500GB NVMe for everything Docker, as an unassigned device: /mnt/disks/nvme_docker (ALL appdata and docker.img)
• 1x 500GB spare USB 3.1 external (unassigned; if I need to move data around, I'll toss things here)

     

The way I set it up at the time, every Docker container's appdata is mounted at /mnt/disks/nvme_docker/appdata/<app name>. Yes, I know that's not ideal/correct, but when I set this up years ago it was how I could make it all work, and I didn't fully understand the cache pool. So before you chastise me: I'm trying to fix it! I want to be able to upgrade any cache pool easily in the future, and this setup isn't sustainable as it is.

    -------------------------------------------------------------------------------------------------------------

I want to move to a tiered cache pool setup:

     

• 4TB SSD cache drive: cache_appdata (for containers that use 50+ GB of data but need fast access, such as Nextcloud, LibrePhotos, and Plex metadata)
• 4TB SSD cache drive: cache_ssd (for normal downloads; move to the array after a week, etc.)
• Assign the NVMe to a new cache pool, nvme_appdata. This will hold my docker.img (unless I can move to a directory-based Docker setup; I don't know how to do that yet and haven't researched it) as well as the appdata for the rest of my Docker containers that don't eat up a lot of storage but that I'd still prefer to be fast.

     

The way I'm approaching this is as follows, and this is where I'm not sure I'm doing it correctly. I know this is a lot of work, and I'd appreciate it if you've read this far and have some advice.

     

• Stop everything first, obviously.
• Copy all files from /mnt/disks/nvme_docker to my temporary drive using either MC or rsync (see the sketch after this list).
• Create all my cache pools the way I want them (assign the NVMe to a new pool, and break the existing pool into two single-drive pools).
• Start the array so the appropriate appdata folders are created on cache_appdata, cache_ssd, and nvme_appdata.
• Move files back to where I think they should be: /mnt/nvme_appdata/appdata/ for most Docker containers, /mnt/cache_appdata/appdata/ for the larger ones.
• Edit all Docker containers: set them back to /mnt/user/appdata/, then set the cache to the correct pool for each container. Don't start them.
• Move the docker.img file to /mnt/nvme_appdata/ (system/docker?) and re-point the Docker settings at it. Edit the default Docker appdata setting to /mnt/user/appdata (it's currently /mnt/disks/nvme_docker/).
• Start things back up and see what breaks?
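For the copy steps, a minimal sketch of what I have in mind (the temp-drive mount point here is just an example name; rsync -a preserves ownership and permissions, which matters for appdata):

    # everything off the UD device onto the temporary drive
    rsync -avh --progress /mnt/disks/nvme_docker/ /mnt/disks/temp_usb/nvme_backup/

    # after the pools are rebuilt, copy appdata back (nvme_appdata pool example)
    rsync -avh --progress /mnt/disks/temp_usb/nvme_backup/appdata/ /mnt/nvme_appdata/appdata/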

     

Now, I know I can't just use mover to accomplish all this; that end state is what I want to get to. In the future, if I need to upgrade a cache drive, I can use mover to move everything from one cache drive to another, upgrade the drive, then use mover to move it all back. Or... am I WAY off here?

     

Thanks for sticking through this. I can explain anything in more detail if needed, but I think this is the correct approach. Anyone want to let me know if I'm completely wrong? Have you done the same at some point?

I just noticed a warning from the Fix Common Problems plugin that my rootfs is getting full.

I haven't added anything new to the system in quite a while, but over the weekend a drive started throwing errors, so I removed that drive and re-synced parity to take it out of the system until I can replace it.

     

64GB of memory in the system; it had been running absolutely fine until that disk threw errors.
Hopefully someone can help me out here. Thank you!
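For reference, the quick checks I ran first (standard Linux tools, nothing Unraid-specific; the -x keeps du on the root filesystem so it doesn't wander into /mnt):

    df -h /                                      # how full rootfs actually is
    du -xh -d1 / 2>/dev/null | sort -h | tail    # biggest top-level offenders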
     

I was reading Squid's topic on this, and I'm posting both my diagnostics and the output generated by the memorystorage plugin/script:

     

    This script may take a few minutes to run, especially if you are manually mounting a remote share outside of /mnt/disks or /mnt/remotes
    
    /usr/bin/du --exclude=/mnt/user --exclude=/mnt/user0 --exclude=/mnt/disks --exclude=/proc --exclude=/sys --exclude=/var/lib/docker --exclude=/boot --exclude=/mnt -h -d2 / 2>/dev/null | grep -v 0$' '
    4.0K /tmp/ca_notices
    5.3M /tmp/fix.common.problems
    24K /tmp/unassigned.devices
    12M /tmp/community.applications
    8.0K /tmp/notifications
    624K /tmp/plugins
    4.0K /tmp/emhttp
    18M /tmp
    4.0K /etc/docker
    4.0K /etc/netatalk
    260K /etc/libvirt-
    4.0K /etc/pkcs11
    136K /etc/lvm
    8.0K /etc/libnl
    8.0K /etc/ssmtp
    16K /etc/samba
    4.0K /etc/rsyslog.d
    40K /etc/php-fpm.d
    16K /etc/php-fpm
    8.0K /etc/php
    40K /etc/nginx
    2.0M /etc/file
    24K /etc/avahi
    48K /etc/apcupsd
    4.0K /etc/sysctl.d
    48K /etc/security
    232K /etc/ssl
    608K /etc/ssh
    100K /etc/pam.d
    4.0K /etc/openldap
    88K /etc/mc
    48K /etc/logrotate.d
    4.0K /etc/sensors.d
    36K /etc/iproute2
    36K /etc/modprobe.d
    7.2M /etc/udev
    8.0K /etc/cron.daily
    8.0K /etc/cron.d
    12K /etc/dbus-1
    4.0K /etc/sasl2
    68K /etc/profile.d
    56K /etc/default
    328K /etc/rc.d
    8.0K /etc/acpi
    12M /etc
    20K /usr/info
    916K /usr/include
    9.1M /usr/man
    13M /usr/doc
    4.0K /usr/systemtap
    21M /usr/libexec
    4.0M /usr/src
    26M /usr/local
    267M /usr/lib64
    111M /usr/share
    42M /usr/sbin
    362M /usr/bin
    853M /usr
    4.0K /lib64/xfsprogs
    4.0K /lib64/e2fsprogs
    972K /lib64/security
    24M /lib64
    21M /sbin
    14M /lib/modules
    4.0K /lib/systemd
    76K /lib/modprobe.d
    36K /lib/dhcpcd
    6.5M /lib/udev
    20M /lib
    16K /run/blkid
    4.0K /run/avahi-daemon
    308K /run/udev
    336K /run
    11M /bin
    257M /var/sa
    620K /var/local
    4.0K /var/kerberos
    32K /var/state
    2.0M /var/cache
    4.0K /var/lock
    28K /var/tmp
    8.0K /var/spool
    1.2M /var/run
    1.9M /var/log
    3.3M /var/lib
    266M /var
    4.0K /root/.local
    4.0K /root/.cache
    4.0K /root/.config
    56K /root
    1.2G /
    0 /mnt/rootshare
    0 /mnt
    
    
    Finished.
    NOTE: If there is any subdirectory from /mnt appearing in this list, then that means that you have (most likely) a docker app which is directly referencing a non-existant disk or cache pool
    
    script: memorystorage.plg executed

     

    plexymcplexface-diagnostics-20220503-1624.zip

  5. 8 minutes ago, regorian said:

    I am having a peculiar issue I hope someone can help with.

     

I run two instances of Sonarr - one for 2160p media and one for everything else. I am also running a Syncarr container alongside them to synchronize specific titles from the non-2160p instance to the 2160p one. What I am seeing is that whenever these containers are restarted (for example, during automated backup from the Backup/Restore Appdata Plugin), the API key for my 2160p instance changes, which breaks the connection that Syncarr is set up to use. While this is not a huge deal, it's becoming annoying to have to go back in and update the container config with the new API key.

     

    Is there a way to configure Sonarr to NOT update the API key that I am missing? Does it have to do with running two instances of Sonarr from the same docker template? My primary Sonarr instance never changes, which is the weird part. Any help would be appreciated.

     

    Instance 1: Primary, Non-4K media


     

    Instance 2: 4K only media


I feel this is because you're running two of the same container from the same Docker image. Try switching your 4K one to binhex's repo. That's what I did, and it's been perfect since.

     

We'd also need to see the location of your Docker image there... but give the switch a shot nonetheless.
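One thing that might help narrow it down: Sonarr stores its key in config.xml under appdata, so you can confirm whether the file itself changes across restarts. Something like this (the path is a guess at your 4K instance's appdata folder; adjust to yours):

    grep ApiKey /mnt/user/appdata/sonarr-4k/config.xml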

     

  6. Just now, binhex said:

yes indeed, if you don't do this then you won't get a working incoming port.

Appreciate it! (Maybe add that to your PIA post so you don't get this same question a lot, lol.)

     

Seriously, thank you for the work you do on this and other containers :)

     

Ombi had been running fine for me since mid-March.

I made a change to my library structure last week, and since then nothing has updated in Ombi.
I re-added all my libraries in the Plex settings, but every time it scans, it errors out in the log with hundreds of these entries.
I had switched the Plex agents to legacy trying to fix this, but alas, it did not work.
Any suggestions?
Edit: After going through the logs, it looks like it's only hung up on one of my six libraries, which makes it even odder. (Still 250+ errors.)

    2020-09-29 07:30:00.399 -04:00 [Error] Exception when adding new Movie "Ice Age: Collision Course"
    Newtonsoft.Json.JsonReaderException: Unexpected character encountered while parsing value: [. Path 'MediaContainer.Metadata[0].Guid', line 1, position 3545.
       at Newtonsoft.Json.JsonTextReader.ReadStringValue(ReadType readType)
       at Newtonsoft.Json.JsonTextReader.ReadAsString()
       at Newtonsoft.Json.JsonReader.ReadForType(JsonContract contract, Boolean hasConverter)
       at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, JsonReader reader, JsonObjectContract contract, JsonProperty member, String id)
       at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
       at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateList(IList list, JsonReader reader, JsonArrayContract contract, JsonProperty containerProperty, String id)
       at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateList(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, Object existingValue, String id)
       at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty containerProperty, JsonReader reader, Object target)
       at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, JsonReader reader, JsonObjectContract contract, JsonProperty member, String id)
       at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
       at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty containerProperty, JsonReader reader, Object target)
       at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, JsonReader reader, JsonObjectContract contract, JsonProperty member, String id)
       at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
       at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type objectType, Boolean checkAdditionalContent)
       at Newtonsoft.Json.JsonSerializer.DeserializeInternal(JsonReader reader, Type objectType)
       at Newtonsoft.Json.JsonConvert.DeserializeObject(String value, Type type, JsonSerializerSettings settings)
       at Newtonsoft.Json.JsonConvert.DeserializeObject[T](String value, JsonSerializerSettings settings)
       at Ombi.Api.Api.Request[T](Request request) in C:\projects\requestplex\src\Ombi.Api\Api.cs:line 78
       at Ombi.Api.Plex.PlexApi.GetMetadata(String authToken, String plexFullHost, Int32 itemId) in C:\projects\requestplex\src\Ombi.Api.Plex\PlexApi.cs:line 159
       at Ombi.Schedule.Jobs.Plex.PlexContentSync.ProcessServer(PlexServers servers, Boolean recentlyAddedSearch) in C:\projects\requestplex\src\Ombi.Schedule\Jobs\Plex\PlexContentSync.cs:line 264
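If anyone wants to see the raw response Ombi is choking on, the stack trace points at Plex's metadata endpoint, so you can hit it yourself with curl (the IP, item ID, and token below are placeholders):

    curl -s "http://YOUR_PLEX_IP:32400/library/metadata/ITEM_ID?X-Plex-Token=YOUR_TOKEN"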

     

  8. 41 minutes ago, JorgeB said:

    I would say that helium level below 100 is bad, it means it's leaking.

I'm not sure what the level should be, i.e., whether that's a percentage or not. There should be some acceptable loss over time, though. What that is... noooooooo idea :)

I just checked my helium drives, and I don't see a helium indicator in the SMART report. (Running an extended test now, though.)
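For anyone else wanting to check theirs, smartctl lists the attribute if the drive reports it (replace sdX with your device; apparently not all drives expose it, since mine don't):

    smartctl -A /dev/sdX | grep -i helium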

  9. 1 hour ago, eeans said:

    I'll pull this one and put in a warranty claim then.

     

The server took a rather large hit during shipping, knocking some disks free from their mountings. These were then free to bang around in the case... when I got it, I could hear them moving before I opened the box :( In hindsight I should have removed them before shipping to be safe.

     

I think I got lucky and only lost this one disk, but are there any good stats to check on the rest of the disks to see if they were also damaged?

Look at the helium level; it looks like anything lower than 25 is bad (that's the failure threshold in that SMART report).
Check the raw read error rate as well.
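If you want to sweep every drive at once, a quick-and-dirty sketch from the Unraid console (assumes standard /dev/sdX naming):

    for d in /dev/sd?; do
        echo "== $d =="
        smartctl -A $d | grep -Ei 'helium|raw_read_error'
    done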

     

  10. SMART Attributes Data Structure revision number: 16
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
      1 Raw_Read_Error_Rate     PO-R--   088   088   016    -    5767168
      2 Throughput_Performance  --S---   130   130   054    -    108
      3 Spin_Up_Time            POS---   148   148   024    -    447 (Average 439)
      4 Start_Stop_Count        -O--C-   100   100   000    -    44
      5 Reallocated_Sector_Ct   PO--CK   100   100   005    -    80
      7 Seek_Error_Rate         -O-R--   100   100   067    -    0
      8 Seek_Time_Performance   --S---   128   128   020    -    18
      9 Power_On_Hours          -O--C-   099   099   000    -    8712  < 363 days, no bueno... should be YEARS.
     10 Spin_Retry_Count        -O--C-   100   100   060    -    0
     12 Power_Cycle_Count       -O--CK   100   100   000    -    44
     22 Helium_Level            PO---K   001   001   025    NOW  1     < ---- bad. no bueno

    ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

     

Damn, this got me looking at all my HGST Ultrastars.
That's right at under a year of use. I would swap/pull that drive immediately and start working on a warranty claim.
Not sure how white-label drives are handled... maybe through the vendor on the drive? (My white-label ones are for HP servers; I would call HPE in my case. YMMV, of course.)

  11. 4 minutes ago, Squid said:

Are you sure about that? What about the /config mount, which usually isn't shown unless you hit Show More Settings?

HAH! Damn, I was hoping I could stealthily edit that and no one would be the wiser!

     

I had to go into advanced mode there, then look at more settings. The normal 'more settings' (where I'm used to looking) wasn't giving me that option.

     

    Appreciate the response!

    Thank you

  12. What about when all mounts ARE mounted with the slave option and this is still showing up, even after a reboot?

My initial thought would be to delete the container and re-create it; I'm just worried about data loss there.

     

Edit: I found it! The /config variable in my container wasn't showing until I went into advanced mode.
Sorry about that!
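For anyone else landing here: the slave option I mean is the propagation flag on each volume mapping. The Unraid template's 'RW/Slave' access mode sets it for you, but in plain docker run terms it looks like this (paths are just an example):

    docker run -v /mnt/disks/nvme_docker/appdata/myapp:/config:rw,slave ...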

     

Hey @jude, I think I see it... it's the permissions on the files.

Edit: Pulled my config screenshots, as it looks like it's mainly a file-permission issue for you, now that I re-read the posts.

     

     

I have a credentials.conf which also has my username on one line and password on another; I don't remember if I created that or if the container did...

Check the permissions on the files in that directory; I just noticed yours were different. (I'm not saying mine are correct, just that mine work.)

     

     

    sh-5.0# ls
    bin   config  dev        etc   lib    mnt  proc  run   srv              sys  usr
    boot  data    downloads  home  lib64  opt  root  sbin  supervisord.pid  tmp  var
    sh-5.0# cd /config
    sh-5.0# cd openvpn
    sh-5.0# ls -lrat
    total 20
    -rwxrwxr-x 1 nobody users  869 Oct 22  2019  crl.rsa.2048.pem
    -rwxrwxr-x 1 nobody users 2025 Oct 22  2019  ca.rsa.2048.crt
    -rwxrwxr-x 1 nobody users   26 Jul  1 15:35  credentials.conf
    -rwxrwxr-x 1 nobody users 3174 Jul  1 15:35 'CA Montreal.ovpn'
    drwxrwxr-x 2 nobody users  101 Jul  1 15:35  .
    drwxrwxr-x 8 nobody users 4096 Jul 17 15:30  ..
    sh-5.0# 
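If yours don't match, something like this from the container console should bring them in line with what I have above (assuming your container also runs as nobody:users; 775 matches the rwxrwxr-x shown):

    chown -R nobody:users /config/openvpn
    chmod -R 775 /config/openvpn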

     

    Hope this helps!

  14. 55 minutes ago, Pducharme said:

@Geekd4d in the Deluge WebUI, at the bottom right, you should see your "WAN IP". If this IP is the VPN IP, then I guess it's working through the VPN. There might be a better way; just something quick I thought of.

Yeah, it's the right IP. Confirmed there and via curl out.

Just strange, I've never seen that speed before... kinda unnerving.

How can I tell if the VPN is working?

With the latest release yesterday, my VPN speeds are a little unrealistic.

     

I hop into the terminal within the Docker container and run curl ifconfig.co, and it DOES show my VPN IP,

BUT: my speed has dramatically increased, from 1-2 MiB/s down to 25-30 MiB/s down. 6GB in 5 minutes? Uhh... that's not right.

    I love the speed, but this seems unrealistic.
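For reference, this is the exact check, runnable from the host as well (the container name is just what mine happens to be called):

    docker exec -it binhex-delugevpn curl -s ifconfig.co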

     

     

  16. 10 hours ago, strike said:

This container does not support br0 at this time. Switch to bridge, or use the Deluge desktop/thin client to get access over br0; that might work. See the FAQ for how to connect with the daemon.

That's exactly what it was!

Popped it on bridge; VPN confirmed and the WebUI works!

    Thank you so much!

  17. 1 hour ago, strike said:

This container does not support br0 at this time. Switch to bridge, or use the Deluge desktop/thin client to get access over br0; that might work. See the FAQ for how to connect with the daemon.

Thank you! I had a feeling it was something network-related. I'll poke at it later today and report back. (Upgrading my parity drives with faster ones.)

     

    thanks again!

     

I'm having a heck of a time switching from Windows to Unraid for my new NAS... :)

     

I'm having an issue with the WebUI connection being reset any time I try to hit it (getting "connection reset by peer").

In the console I can verify the VPN is running, and I do get an external IP that is not my own.

The WebUI works when not connected to the VPN.

New .ovpn files tossed into /config/openvpn; connecting to PIA Montreal, which does have port forwarding.

LAN_NETWORK IS correct. (192.168.128.0/24 is my network.)

Adblock killed. (Dunno why I was thinking that.)

     

    Thoughts?

The only thing I noticed in the log was that the Docker interface is defined as eth0, and I have br0 configured in settings. Not sure if that is transparent to the image and it's just using eth0 as the generic first NIC?
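For what it's worth, this is how I pulled those lines out of the attached log; inside the container it lives at /config/supervisord.log (at least in my setup):

    grep -Ei 'eth0|lan network' /config/supervisord.log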

     

    Appreciate the help, this community has been great!

     

    supervisord.log

     

So, I'm in the middle of building out a new Plex server in a U-NAS 810 chassis, and I'm looking at both Unraid and FreeNAS to accomplish this. This will be my first real deep dive at home into a Linux-based system, other than the Raspberry Pis scattered across the house.

     

8x 6TB HDDs run through an LSI RAID card as JBOD (or should I just not flash it to IT mode?)

i5-9400

32GB DDR4 (the max the mobo can handle)

3x 500GB Samsung SSDs on onboard SATA (I might be able to go to 4 at a later date if it helps caching performance)

1x 500GB NVMe (was going to be my boot drive, but it looks like I shouldn't do that with an Unraid setup? How does this affect Plex? Seems people like SanDisk?)

     

The system's main purpose is Plex, and then I'd like to use this opportunity to learn Docker by tossing in additional apps (Sonarr? Ombi? Calibre?).

From my reading, it sounds like ZFS and Unraid might be the best? Where should Docker containers go?

     

So, is Unraid right for me?

(Convince this old Windows admin to switch!)

     
