skaterpunk0187

Members
  • Posts

    50

Posts posted by skaterpunk0187

  1. I just think XMP is flaky. I have had tons of issues: blue screens, random reboots, and hard locks, just to name a few, on my own and clients' machines. I've had machines run great for weeks to over a year, then bam, erratic PC behavior, on both Windows and Linux PCs. Turning off XMP always fixes the problems. You're not trying to get the most FPS out of a server anyway, so it's really not needed.

    • Like 1
  2. First off, I'd like to say Tdarr is awesome and saved me just under 10 TB before I had an issue and my drives decided to lose their partition tables and I lost everything (not Tdarr related). Now I'm using it to transcode as I add media back. I don't have any issues; I have a question and a scenario. I'm wondering how the output folder option works. Will it follow the directory structure? For example, if the original file is in /media/tv/show/season/show-S01E01.mkv, will enabling the output folder option move it to /new-media/tv/show/season01/show-S01E01.mkv, automatically placing files under the show name and season (same with movies)? What I want to do is use the cache drive: move ripped disks to the media shares, have Tdarr pick them up from cache (before the mover) and transcode them, then move them to the array. Have Tdarr watch /mnt/user/media/tv and move the completed file to /mnt/user0/media/tv. I would like to have Plex auto-watch /user0/ to auto-add media, and not have it watch /user/. That way Plex doesn't pick up newly added files before Tdarr can work its magic, and it can get metadata and scan for intros for the old files. Most of the time Plex doesn't notice the file type and size change, so I get a playback error, and it doesn't redownload the art and posters and scan all over again to detect intros.  <--- or maybe a future feature.
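    For what it's worth, the structure-preserving move I'm asking about could be sketched like this. This is not Tdarr's actual code, just a hypothetical illustration of re-rooting a file from the watch folder to the output folder while keeping its relative path:

```python
from pathlib import PurePosixPath

# Hypothetical sketch (not Tdarr's implementation): keep the file's
# path relative to the watch root and re-root it under the output root.
def map_output(src, watch_root, output_root):
    rel = PurePosixPath(src).relative_to(watch_root)
    return str(PurePosixPath(output_root) / rel)
```

    With the paths from my scenario, map_output("/mnt/user/media/tv/show/season/show-S01E01.mkv", "/mnt/user/media/tv", "/mnt/user0/media/tv") would land the file under /mnt/user0/media/tv with the same show/season folders.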

  3. I created a telegraf.conf with all the info needed for the Ultimate Unraid Dashboard. It only has the sensors needed and none of the extras. It's much easier to read with only 80 lines instead of 10,000. https://github.com/skaterpunk/UUD/blob/main/telegraf.conf

    I would say double-check your telegraf.conf settings and make sure the Telegraf container has its network type set to host. If you're still only getting "none" for the host variable, maybe something didn't go right pulling the image and setting up the container. Also check the Telegraf logs; they may help too. I have two identical Unraid servers; my main has no errors in the Telegraf log, while the backup has nothing but errors. I even copied and pasted the telegraf.conf file, only changing the database name.

    • Thanks 1
  4. On 2/24/2022 at 11:37 AM, falconexe said:

    Does anyone know of a way to backload ALL Historical Data from Tautulli into InfluxDB via Varken (or another API/plugin)? I have data going back years in Tautulli, and I would love to access it all through the UUD. I did some research and everything I have seen is no this is not possible. If you have seen this as a viable option, please let me know and I will add it.

    Looking at the Tautulli code in Varken, it looks like it could be possible. A snippet of the code:

     def get_historical(self, days=30):
         influx_payload = []
         start_date = date.today() - timedelta(days=days)
         params = {'cmd': 'get_history', 'grouping': 1, 'length': 1000000}
         req = self.session.prepare_request(Request('GET', self.server.url + self.endpoint, params=params))
         g = connection_handler(self.session, req, self.server.verify_ssl)
         if not g:

    It looks like it could be possible, I think, but coding is not my forte. Maybe get in contact with the Varken dev, or maybe fork it.
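    As a rough sketch of what that code is doing, the same Tautulli endpoint can be queried directly with whatever window you want. The URL and API key below are placeholders, and the parameters mirror the ones in the snippet above:

```python
from urllib.parse import urlencode

# Placeholders; point these at your own Tautulli instance.
TAUTULLI_URL = "http://tautulli:8181"
API_KEY = "your-api-key"

# The same query parameters Varken sends, against Tautulli's /api/v2 endpoint.
params = {"apikey": API_KEY, "cmd": "get_history", "grouping": 1, "length": 1000000}
url = TAUTULLI_URL + "/api/v2?" + urlencode(params)
```

    Fetching that URL (e.g. with requests or curl) returns the full history Tautulli has, so in principle a backload script could walk it and write points into InfluxDB.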

  5. 13 hours ago, Ludditus said:

     

    I just took a look at my Varken install and the retention policy was already on autogen with 7d shards, and I can get queries back as far as I want. I don't ever remember setting that policy explicitly, but I think I had manually created varken within the influx docker bash, since I wanted to have it alongside existing databases in the same container.  Maybe doing it that way instead of having Varken create its own database avoided the 30d retention policy.

    I can confirm this; I have tested it several times. If you allow Varken to create the database in InfluxDB, it gets a retention policy of 30 days. This is set in the script that installs Varken in the container and cannot be changed in varken.ini. To get around it, create the database yourself before installing or starting Varken for the first time. This will give you the autogen retention policy.
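    For reference, a minimal sketch of the workaround in the influx shell (the database name "varken" is assumed here; use whatever name your varken.ini points at):

```sql
-- Create the database yourself before Varken's first start...
CREATE DATABASE varken
-- ...and it keeps the default autogen policy (infinite retention):
SHOW RETENTION POLICIES ON varken
```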

    • Thanks 1
  6. I don't have an hourly chart; I do however have a daily chart. I'm writing 170 MB per day; divided by 24 hours, that's only about 7.1 MB per hour. I only have the Telegraf docker running on the server; InfluxDB and Grafana are running on a Proxmox server with docker in an LXC container. I also have my appdata stored on a separate SSD, not part of my cache. The only docker component on my cache drive is the system directory that contains docker.img. I would guess that is what's writing to the cache pool.

    • Like 1
  7. 5 hours ago, Mr_Jay84 said:

    I also have the issue of none of the HDs showing under Parity, Array, Cache, SSD, Unassigned Drives. They do show under the Disk Overview windows though.

    I had this issue too. It was my fault; I misconfigured my telegraf.conf. Under the [[inputs.diskio]] section, make sure the following line is there and/or uncommented.

    device_tags = ["ID_SERIAL"]

    I mistakenly commented it out in my conf file, and it was a real PITA to figure out.

    Hope that helps.

    • Like 1
  8. 2 hours ago, SimonF said:

    What information are you missing, I get smart info for my SAS drives under 6.9.1

     

    Which drives are you using?

    I am using Seagate ST2000NM0023 drives. They are older; I have a couple newer drives ordered and on the way for testing in my test server. The drive identity says "Available - device has SMART capability." I included a screenshot of the self-test section along with the SMART report.

    Smart.png

    ST2000NM0023_Z1X1AD7B_35000c50057a953f7-20210406-1756.txt

  9. On 1/4/2021 at 7:35 PM, alexdodd said:

    Is it still the case, it seems to be unless its another issue, that smartmontools and the unraid front end GUI still dont play nice with SAS drives? 

     

    I downgraded last time, but i've bitten the bullet and moved forward to 6.9 so I can use cache pools.  Now I'm SMART-less.  Unless I manually run everything, which isn't ideal i'm sure i'm not the only person missing SMART on the webgui? Although it might be my fault, i might have set something else somewhere.

    It must still be the case; I can't get SMART data from SAS drives either. Maybe for 6.9.2 @limetech could upgrade to a version with the bug fixed, or downgrade to the version from 6.8.2 that worked. I do get that probably 99% of Unraid users don't use SAS drives.

  10. 7 hours ago, falconexe said:

     

    @skaterpunk0187

     

    Thanks for the info. I have yet to dive into Unraid.Net. Can you explain exactly how the API is used with this plugin? Anything I can leverage or add from this news into UUD 1.7? Any links or official documentation? Thanks!

    The unraid.net plugin connects your Unraid server with the site. It gives you access to download your key, automatic backup of the USB drive (unencrypted for now), and remote access (requires port forwarding). Also, if you have more than one Unraid server, it makes it easy to switch back and forth between them. I don't think there would be anything to integrate with 1.7, but I'm not positive on that.

  11. 4 minutes ago, MarkusMcNugen said:

    Only other thing I can think of would be to check the CrushFTP.log file and see if it lists some kind of error when trying to read the plugin. I use letsencrypt for HTTPS with the swag container but purchase certs for other things. You can get some super cheap certs these days, ssls.com is where I buy mine.

     

    https://www.ssls.com/ssl-certificates/comodo-positivessl

    I checked the log; no plugin error. That will work. I used to pay for certs to use with my Synology when I used that, but I couldn't remember where I purchased them. I don't mind paying; I just didn't want to pay an arm and a leg for a cert to use myself.

  12. 24 minutes ago, MarkusMcNugen said:

    Did you use the correct plugin file, maybe you grabbed the CrushFTP 9 version? I copied the CrushFTP 10 version to the plugin directory, restarted the container and it's there. 😉

     

    CrushFTP LetsEncrypt Plugin:

    https://www.crushftp.com/crush10wiki/attach/LetsEncrypt plugin/LetsEncrypt.jar

    I thought I used the CrushFTP 10 version the first time, but I used the link and tried that, and it still doesn't show in the plugin list. I even tried removing the container and the appdata directory as well, and it's still not there. I guess when I get some time I'll look into the reverse proxy, since it will allow for other passthrough as well. Thanks for looking into it.

  13. I'd like to say I'm really liking the CrushFTP container. I have been looking for something like this since I switched to unraid.

    I would really like to use the LetsEncrypt plugin. I have downloaded it and copied it to my /*/appdata/crushftp/plugins, but I can not get it to load. I have tried restarting, and stopping and starting, the container, and nothing. I'm assuming it's a Java issue, since it says it doesn't work with Java 9 & 10. Maybe a feature request to add in an update, or maybe I'm doing something wrong. Thanks

  14. 12 minutes ago, falconexe said:

    Nice. FYI you can give your SSD panels unique names to differentiate them by changing the title. So you can tell which SSD drive is which in your Cache Pool.

    I did know I could change the names; I just haven't yet.

    I added the growth snapshot back. I did try editing it, but the "field (total)" threw me off; I was looking for specifics like "tv shows" or "shows" that I should change.

     

  15. 25 minutes ago, falconexe said:

    And it goes without saying, you guys will do whatever you want with your Plex Library stats panels. I'm adding all this stuff as a foundation for you to build upon and to provide example code. If any of you have suggestions, I'd be happy to add them. Also, I would love to see how you guys have adapted my UUD for your personal use.

    I added SSD Lifetime Writes, Reads, and Used for both NVMe drives in my cache pool, and the same for my Optane drive as my second cache pool (can't wait for @limetech to add multi-array pools).

    I don't have Documentary and Anime directories for Plex, so I removed them and added the TV Episodes and Movie Week, Month, and Year panels from 1.5. I also removed the Plex Library Growth snapshot, since it showed added TV shows and not episodes, since I don't use the *darrs to add to my collection.

    Plex Data.png

    SSD Life.png

  16. 25 minutes ago, falconexe said:


    Check your Telegraf docker log for errors. Something is off. Everyone should be able to get drives to populate if the correct Telegraf plugins are installed, enabled in the Telegraf config, and the correct datasource is present.

     

     

    Here is my inputs.diskio config (I assume that's the plugin) for UUD to get drive info. The Telegraf log has a SMART error for a drive sitting in the server in a precleared state; I don't think that is the issue. Everything seems to work: all the SMART stuff, the array growth. I haven't checked every panel, but from what I've seen it only messes up the "SSD Writes" panels.

    Diskio.png

  17. 5 minutes ago, falconexe said:


    Heads up, this is not going to work for the UUD as intended since this query modification you made uses the “name”. The UUD panels are designed for serial numbers. Furthermore, the “SD*” names change at boot.

     

    If the 1.6 drive queries are not working for some people (no idea why that would be...) then please download 1.5 and use that query.

    I should have known that was too easy. Yes, I know sd* naming can change, but it's pretty static unless the boot order is changed in BIOS/UEFI or the hardware physically changes, like a new bay on a backplane or a different SATA port. Serial would be better to use, 100%. 1.5 never listed drives for me either.

  18. 5 minutes ago, Orbsa said:

    I see the variable list reload, but they still all show up as empty

    This wasn't showing up for me either, so I changed the Query and Regex in all the disk variables. Pic shown, but here they are to copy and paste:

     

    Query: SHOW TAG VALUES FROM "diskio" WITH KEY = "name"

    Regex: (?!sd.1|md|loop|nvme0n\dp.*).*$

     

    The Regex isn't needed; it just removes all the drive partitions and the array-mounted drives, which gives a bit less to choose from.

    Hope it works for others

    Disk.png
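    To sanity-check that Regex (Grafana applies it in the browser's JS engine, but Python's re handles this pattern the same way), here's a quick sketch of which device names it keeps and which it drops:

```python
import re

# The variable regex from above: a negative lookahead that drops
# sdX1 partitions, md devices, loop devices, and nvme partitions.
pattern = re.compile(r"(?!sd.1|md|loop|nvme0n\dp.*).*$")

def keep(name):
    # A name is kept only if none of the lookahead alternatives
    # match at the start of the string.
    return bool(pattern.match(name))
```

    Whole disks like "sda" and "nvme0n1" pass through, while "sda1", "nvme0n1p1", "md1", and "loop0" get filtered out.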

  19. 1 hour ago, falconexe said:

    For those who are having issues with the UNRAID API even showing data (on its own web page), please ensure you log in with "root" and that password. Please also pay attention to the HTTPS checkbox. If you get this wrong, it will not work! Give it a few minutes. If you are still not having success, try stopping and restarting the docker. If that does not work, completely blow away the docker AND APP DATA folder for the docker, and try again with "root" and the correct level of security (checkbox).

     

    If all else fails, please report the NON UUD issue to the topic forum that handles that Docker.

    I could not find anything about this. I spent the last hour on it: I mirrored my server's switch port and ran tcpdump. It turns out that if you are using the unraid.net plugin with remote access enabled, it disables direct IP connections via some DNS hackery with a random-string .unraid.net hostname. This blocks UNRAID-API from connecting to Unraid by IP, and UNRAID-API is not capable (from what I can tell) of using an FQDN for the connection; side note, it just doesn't do a DNS lookup. Also, UNRAID-API or Unraid itself seems to have an issue with "Use SSL/TLS" set to auto (at least for me), but it works 100% if that setting is set to yes or no and the proper settings are used in UNRAID-API. As soon as I signed out and removed the unraid.net plugin, the API worked just fine. Hopefully this will help others.

    And Awesome work @falconexe with 1.6.

    • Thanks 1
  20. I'm sure it's me not configuring my telegraf.conf file properly, but I can not get my disks to show up in the variables. I thought this was normal till I saw the screenshots of the new 1.6 variable section. All my drives show under the Array Disk Storage and SMART data sections on the dashboard. Offhand, does anyone know what I have to uncomment to fix this?  Thank you