skaterpunk0187

Everything posted by skaterpunk0187

1. I just think XMP is flaky. I have had tons of issues (blue screens, random reboots, hard locks, just to name a few) on my own and client machines. I've had them run great for weeks to over a year, then bam, erratic PC behavior, on both Windows and Linux PCs. Turning off XMP always fixes the problems. You're not trying to get the most FPS out of a server anyway, so it's really not needed.
2. First off, I'd like to say Tdarr is awesome and saved me just under 10TB, before I had an issue where my drives decided to lose their partition table and I lost everything (not Tdarr related). Now I'm using it to transcode as I add media back. I don't have any issues; I have a question and a scenario. I'm wondering how the output folder option works. Will it follow the directory structure? For example, if the original file is in /media/tv/show/season/show-S01E01.mkv, will enabling the output folder option move it to /new-media/tv/show/season01/show-S01E01.mkv, automatically placing files under the show name and season (same with movies)? What I want to do is use the cache drive: move ripped disks to the media shares, have Tdarr pick them up from cache (before the mover runs), transcode them, then move them to the array. Have Tdarr watch /mnt/user/media/tv and move the completed file to /mnt/user0/media/tv. I would then have Plex watch /user0/ to auto-add media, and not watch /user/, so Plex doesn't pick up newly added files before Tdarr can work its magic, then get metadata and scan for intros on the finished files. Most of the time Plex doesn't see the file type and size change, so I get a playback error, and it doesn't redownload the art and posters or rescan to detect intros. <--- or maybe a future feature.
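The cache-to-array mapping described above can be sketched as a simple path translation. This is illustrative only: the share paths are the ones from the post, and `output_path` is a hypothetical helper, not part of Tdarr.

```python
# Sketch of the move described above: Tdarr watches the cache-backed user
# share (/mnt/user/media/tv) and the finished file lands on the array-only
# share (/mnt/user0/media/tv), keeping the show/season structure intact.
from pathlib import PurePosixPath

WATCH_ROOT = PurePosixPath("/mnt/user/media/tv")    # cache + array view
OUTPUT_ROOT = PurePosixPath("/mnt/user0/media/tv")  # array-only view

def output_path(source: str) -> str:
    """Map a transcoded file from the watched share to the array share."""
    rel = PurePosixPath(source).relative_to(WATCH_ROOT)
    return str(OUTPUT_ROOT / rel)

print(output_path("/mnt/user/media/tv/show/season/show-S01E01.mkv"))
# /mnt/user0/media/tv/show/season/show-S01E01.mkv
```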
3. Yeah... I don't understand that. The only thing I can figure is that the Grafana data source isn't set right, or it has an issue pulling data from the database.
4. I created a telegraf.conf with all the info needed for the Ultimate Unraid Dashboard. It only has the sensors needed and none of the extras; it's much easier to read at only 80 lines instead of 10,000. https://github.com/skaterpunk/UUD/blob/main/telegraf.conf I would say double-check your telegraf.conf settings and make sure the telegraf container has its network type set to host. If you're still only getting "none" for the host variable, maybe something didn't go right pulling the image and setting up the container. Also check the telegraf logs; they may help too. I have two identical Unraid servers: my main has no errors in the telegraf log, while the backup has nothing but errors, even though I copied and pasted the telegraf.conf file and only changed the database name.
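For reference, a trimmed telegraf.conf along these lines looks roughly like the sketch below. This is an assumption-laden outline, not the file from the link above: the InfluxDB URL and database name are placeholders, and the input plugins listed are the common ones UUD panels query.

```toml
[agent]
  interval = "30s"

[[outputs.influxdb]]
  urls = ["http://192.168.1.10:8086"]   # placeholder: your InfluxDB host
  database = "telegraf"                 # placeholder: the database UUD queries

[[inputs.cpu]]
[[inputs.mem]]
[[inputs.net]]
[[inputs.disk]]

[[inputs.diskio]]
  device_tags = ["ID_SERIAL"]           # lets panels key drives by serial

[[inputs.smart]]
  attributes = true
```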
5. You can change it right there in the base URL field; that is for the Docker icons. Looking at your variables, you have "none" for almost all of them, so I'm going to have to say there is an issue with the telegraf config file. That kind of mistake is easy to make and hard to find.
6. Looking in the Tautulli code for Varken, it looks like it could be possible. Snippet of code:

    def get_historical(self, days=30):
        influx_payload = []
        start_date = date.today() - timedelta(days=days)
        params = {'cmd': 'get_history', 'grouping': 1, 'length': 1000000}
        req = self.session.prepare_request(Request('GET', self.server.url + self.endpoint, params=params))
        g = connection_handler(self.session, req, self.server.verify_ssl)
        if not g:

It looks like it could be possible, I think, but coding is not my forte. Maybe get in contact with the Varken dev, or maybe fork it.
7. I can confirm this; I have tested it several times. If you allow Varken to create the database in InfluxDB, it gets a retention policy of 30 days. This is set in the script that installs Varken in the container and cannot be changed in varken.ini. To get around this, create the database yourself before installing or starting Varken for the first time; this will give you the autogen retention policy.
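With the InfluxDB 1.x CLI, the workaround above looks something like this. The host address and the database name "varken" are assumptions; check your varken.ini for the actual database name.

```shell
# Create the database before Varken's first start so it keeps InfluxDB's
# default autogen retention policy instead of Varken's 30-day policy.
influx -host 127.0.0.1 -execute 'CREATE DATABASE varken'
influx -host 127.0.0.1 -execute 'SHOW RETENTION POLICIES ON varken'
```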
8. I don't have an hourly chart; I do however have a daily chart. I'm using 170 MB per day, but divide that by 24 hours and I am only writing about 7.1 MB per hour. I only have the telegraf docker running on the server; InfluxDB and Grafana are running on a Proxmox server with Docker in an LXC container. I also have my appdata stored on a separate SSD that is not part of my cache. The only Docker component on my cache drive is the system directory that contains docker.img, and I would guess that is what's writing to the cache pool.
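The per-hour figure above is just the daily total divided by 24:

```python
# 170 MB written per day works out to roughly 7.1 MB per hour.
mb_per_day = 170
mb_per_hour = mb_per_day / 24
print(round(mb_per_hour, 1))  # 7.1
```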
  9. @falconexe welcome back! It would be awesome to see updates again. Grats on the marriage.
10. I had this issue too. It was my fault; I misconfigured my telegraf.conf. Under the [[inputs.diskio]] section, make sure the following line is there and/or uncommented: device_tags = ["ID_SERIAL"] I mistakenly commented it out in my conf file and it was a real PITA to figure out. Hope that helps.
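In telegraf.conf that section should look like this (the commented line shows the mistake described above):

```toml
[[inputs.diskio]]
  # Correct -- uncommented, so disk panels get the serial tag they filter on:
  device_tags = ["ID_SERIAL"]
  # The misconfiguration: commenting it out drops the tag entirely.
  # device_tags = ["ID_SERIAL"]
```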
11. I am using Seagate ST2000NM0023 drives. They are older; I have a couple of newer drives in my test server and more ordered and on the way for testing. The drive identity says Available - device has SMART capability. I included a screenshot of the self-test section along with the SMART report. ST2000NM0023_Z1X1AD7B_35000c50057a953f7-20210406-1756.txt
12. It must still be the case; I can't get SMART data from SAS drives either. Maybe for 6.9.2 @limetech could upgrade to a version with the bug fixed, or downgrade to the version from 6.8.2 that worked. I do get that probably 99% of Unraid users don't use SAS drives.
13. The unraid.net plugin connects your Unraid server with the site. It gives you access to download your key, automatic backup of the USB drive (unencrypted for now), and remote access (requires port forwarding). Also, if you have more than one Unraid server, it makes it easy to switch back and forth between them. I don't think there would be anything to integrate with 1.7, but I'm not positive on that.
14. Just a little note: the Unraid.net plugin got updated and it works alongside the Unraid API now.
15. I checked the log; no plugin error. That will work. I used to pay for certs to use with my Synology when I used that, but I couldn't remember where I purchased them from. I don't mind paying; I just didn't want to pay an arm and a leg for a cert to use myself.
16. I thought I used the CrushFTP 10 version the first time, but I used the link and tried that, and it still doesn't show in the plugin list. I even tried removing the container and removing the appdata directory as well, and it's still not there. I guess when I get some time I'll look into the reverse proxy, since it will allow for other passthrough as well. Thanks for looking into it.
17. I'd like to say I'm really liking the CrushFTP container; I have been looking for something like this since I switched to Unraid. I would really like to use the LetsEncrypt plugin. I have downloaded it and copied it to my /*/appdata/crushftp/plugins, but I cannot get it to load. I have tried restarting the container, and stopping and starting it, and nothing. I'm assuming it's a Java issue, since it says it doesn't work with Java 9 & 10. Maybe a feature request to add in an update, or maybe I'm doing something wrong. Thanks
18. I did know I could change the names; I just haven't yet. I added the growth snapshot back. I did try editing it, but the "field (total)" threw me off; I was looking for specifics like "tv shows" or "shows" that I should change.
19. I added SSD Lifetime Writes, Reads, and Used for both NVMe drives in my cache pool, and the same for my Optane drive in my second cache pool (can't wait for @limetech to add multi-array pools). I don't have Documentary and Anime directories for Plex, so I removed them and added the TV Episodes and Movies Week, Month, and Year panels from 1.5. I also removed the Plex Library Growth snapshot, since it showed added TV shows and not episodes, and I don't use the *arrs to add to my collection.
20. Here is my inputs.diskio config (I assume that's the plugin) for UUD to get drive info. The telegraf log has a SMART error for a drive sitting in the server in a precleared state; I don't think that is the issue. Everything seems to work: all the SMART stuff and the array growth work. I haven't checked every panel, but from what I've seen it will only mess up the "SSD Writes" panels.
21. I should have known that was too easy. Yes, I know sd* naming can change, but it is pretty static unless the boot order is changed in BIOS/UEFI or the hardware physically changes, like a new bay on a backplane or a drive plugged into a different SATA port. Serial would 100% be better to use. 1.5 also never listed drives for me either.
22. This wasn't showing up for me either, so I changed the Query and Regex in all the disk variables. A pic is shown, but to copy and paste: Query: SHOW TAG VALUES FROM "diskio" WITH KEY = "name" Regex: (?!sd.1|md|loop|nvme0n\dp.*).*$ The Regex isn't strictly needed; it just removes all the drive partitions and the array-mounted drives, which gives a bit less to choose from. Hope it works for others.
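The Regex above can be checked outside Grafana. Here's a small demo using Python's re module as a stand-in for Grafana's (JavaScript) regex engine; both support the negative lookahead it uses. The device names are illustrative.

```python
# The lookahead rejects names matching any of: sd.1 (partitions like sda1),
# md* (array devices), loop*, or nvme0n<digit>p* (NVMe partitions).
import re

pattern = re.compile(r"(?!sd.1|md|loop|nvme0n\dp.*).*$")

devices = ["sda", "sda1", "md1", "loop0", "nvme0n1", "nvme0n1p1"]
kept = [d for d in devices if pattern.match(d)]
print(kept)  # ['sda', 'nvme0n1']
```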
23. I could not find anything about this, so I spent the last hour on it: I mirrored my server's port on my switch and ran tcpdump. It turns out that if you are using the unraid.net plugin with remote access enabled, it disables direct IP connections with some DNS hackery, using a random string followed by .unraid.net. This blocks UNRAID-API from connecting to Unraid with an IP, and UNRAID-API is not capable (from what I can tell) of using an FQDN as a connection; side note, it just doesn't do a DNS lookup. Also, UNRAID-API or Unraid itself seems to have an issue with "Use SSL/TLS" set to auto (or at least for me), but it works 100% if that setting is set to yes or no and the proper settings are used in UNRAID-API. As soon as I signed out and removed the unraid.net plugin, the API worked just fine. Hopefully this will help others. And awesome work @falconexe with 1.6.
24. I'm having the same issue as @Flendor: UNRAID-API gives no info on Unraid. Occasionally it will give mover info, but it refreshes and goes away within seconds of showing up. Has anybody discovered a fix for this? Thanks in advance.
25. I'm sure it's me not configuring my telegraf.conf file properly, but I cannot get my disks to show up in the variables. I thought this was normal till I saw the screenshots of the new 1.6 variables section. All my drives show under the Array Disk Storage and SMART Data sections on the dashboard. Offhand, does anyone know what I have to uncomment to fix this? Thank you