
grumpy

Members
  • Posts: 35
  • Joined
  • Last visited


grumpy's Achievements

Noob (1/14)

Reputation: 2

Community Answers

  1. There is no session _value from Varken to Influx, unless that is an Influx built-in. But that is moot, as @falconexe is not using session_id for these queries anyway; he is using player_state.

```python
hash_id = hashit(f'{session.session_id}{session.session_key}{session.username}{session.full_title}')
influx_payload.append(
    {
        "measurement": "Tautulli",
        "tags": {
            "type": "Session",
            "session_id": session.session_id,
            "friendly_name": session.friendly_name,
            "username": session.username,
            "title": session.full_title,
            "product": session.product,
            "platform": platform_name,
            "product_version": product_version,
            "quality": quality,
            "video_decision": video_decision.title(),
            "transcode_decision": decision.title(),
            "transcode_hw_decoding": session.transcode_hw_decoding,
            "transcode_hw_encoding": session.transcode_hw_encoding,
            "media_type": session.media_type.title(),
            "audio_codec": session.audio_codec.upper(),
            "audio_profile": session.audio_profile.upper(),
            "stream_audio_codec": session.stream_audio_codec.upper(),
            "quality_profile": session.quality_profile,
            "progress_percent": session.progress_percent,
            "region_code": geodata.subdivisions.most_specific.iso_code,
            "location": location,
            "full_location": f'{geodata.subdivisions.most_specific.name} - {geodata.city.name}',
            "latitude": latitude,
            "longitude": longitude,
            "player_state": player_state,
            "device_type": platform_name,
            "relayed": session.relayed,
            "secure": session.secure,
            "server": self.server.id
        },
        "time": now,
        "fields": {
            "hash": hash_id
        }
    }
)
influx_payload.append(
    {
        "measurement": "Tautulli",
        "tags": {
            "type": "current_stream_stats",
            "server": self.server.id
        },
        "time": now,
        "fields": {
            "stream_count": int(get['stream_count']),
            "total_bandwidth": int(get['total_bandwidth']),
            "wan_bandwidth": int(get['wan_bandwidth']),
            "lan_bandwidth": int(get['lan_bandwidth']),
            "transcode_streams": int(get['stream_count_transcode']),
            "direct_play_streams": int(get['stream_count_direct_play']),
            "direct_streams": int(get['stream_count_direct_stream'])
        }
    }
)
self.dbmanager.write_points(influx_payload)
```
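The "hash" field in that payload is what makes de-duplication possible downstream: identical session snapshots produce the same ID, so repeated writes of the same state can be collapsed (e.g. by grouping on "hash" in a query). A minimal sketch of how such a helper could behave; Varken's actual hashit implementation may differ, and sha256 here is an assumption:

```python
import hashlib

def hashit(value: str) -> str:
    # Hypothetical stand-in for Varken's hashit helper (assumption):
    # derive a stable, fixed-length ID from the concatenated session fields.
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# The same session state always hashes to the same ID; any change in an
# identifying field produces a different one.
a = hashit("12345" + "key" + "grumpy" + "Some Movie")
b = hashit("12345" + "key" + "grumpy" + "Some Movie")
c = hashit("12345" + "key" + "grumpy" + "Another Movie")
print(a == b, a == c)  # True False
```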
  2. Is there a way to limit the display of duplicates in Grafana? I realize each one is a new input from Varken, but it is the same data over and over. Stream Log (Detailed)
  3. Now, after watching the panel for 16 hours and correcting some issues for myself, I have noticed other issues that go beyond my understanding. Stream Log (Overview) misses some entries: when playing music from Plex it can keep up sometimes, skip a couple of songs, or even skip tens of songs. This is probably not noticeable during movies or TV shows; I do not know, as I haven't made it that far during testing. Stream Log (Detailed) keeps up, even though there are delays, which is to be expected (Plex -> Tautulli -> Varken -> Influx); the same goes for Current Streams. During music playback the percentage is of no use: the song can show 72% remaining when in fact the next song has already started playing. Even though most of my posts seem like complaints, I really would like to thank @falconexe for all of his hard work. It is functional and visually pleasing, and I know I wouldn't have been able to do this on my own.
  4. @PJsUnraid I do not believe it is Varken causing the issue; I think it is a timing issue. If you look at the Influx database you will see all of the Plex data being updated constantly (not sure if it is every 30 seconds or not), but the metrics you are missing are only updated hourly. So you no longer see N/A, go to the panels that show Total: click the 3-dots drop-down at the top right of the panel and choose Edit; once in edit mode, click Query options on the left side of the panel; change Relative time to 62m or 1h; then click Apply at the top right of the page to close the panel. Now it will show the number in a persistent state until it changes on the next query.
  5. I'm an idiot; OK, things I did not know: I needed to add a Varken data source connection, not just Telegraf. Now I see Plex stuff; not everything yet, but at least I'm moving forward.
  6. @Jufy111 Thank you, I'm looking forward to your example. As for smartmontools, I think it is installed with the docker image Post Argument, which is how I did it:

```shell
/bin/sh -c 'apt update && apt install -y smartmontools && apt install -y lm-sensors && telegraf'
```

with --user 0. I do get the panels: Drive SMART Health Overview, Drive Life
  7. So now that I have all components running as expected (I think), I have more questions. Did you set up a Varken database in Influx, as your Dashboard variable tab picture suggests you did? The guides I followed do not suggest that, but rather to use the Telegraf database. I have no Plex (media) information showing on my Dashboard, but as far as I can tell Telegraf, Tautulli and Varken are all processing the data, as seen in the log of each. I do not have IPMI, so how do I get that information (temps): Array Drive, System Temps/Power, CPU, RAM DIMM, Fan Speeds? I use NUT (slave) for the UPS; the information is shown in Unraid but not in the dashboard. How do I get that? SSD Temp, Life, Power On, and Lifetime Reads/Writes are also missing.
  8. My original post was poorly worded: I had Varken installed, but it would not start. I got past that by changing the owner of config.example from root to nobody; why it interfered with the required config.ini beats me. Every time I edit the config and remove some stuff, when I restart it replaces the removed stuff. There needs to be an updated how-to! Which is true of many contributions.
  9. My experiment was an array with each disk being a standalone ZFS drive. It is hard to do a pool with 3x 3TB, 1x 2TB, and 2x 1TB. I could do a pool with the original 4x 16TB plus the 2x 16TB, I'm just not sure of the game plan needed and how to move the original data around. Obviously doing it in place will not work, as there would not be enough free HDs for any kind of a real pool.
  10. Current Unraid server: a 10-core Intel® Xeon® W-2155 CPU @ 3.30GHz with 16 GB RAM, 4x 16TB spinning rust, and a 4TB SSD cache. Moving to: MB X399 AORUS XTREME-CF, AMD Ryzen Threadripper 2950X 16-core @ 3500 MHz with 32 GB RAM, 2 new 16TB spinning rust plus the current 4x 16TB, and the current 4TB SSD cache (could add another 2TB SSD to the cache). Both are 10GbE. The only reason for changing hardware is that the Dell's case does not support more drives; it also runs hot and does not support Unraid 6.12 (last I checked). I tried ZFS on the replacement board and CPU and it works, but I'm not impressed. If I run more than one task against the drives I get CPU_IOWAIT errors, caused by a memory shortage (1/8) I'm sure, as the CPU has room to spare and no other tasks are running. So I do not understand why ZFS is so in demand: will it really be a benefit, or just more headaches? I'm all for cutting edge but do not see the benefit. My usage is a media/document server with VMs and Docker containers running 24/7. Is Mover really that slow? About 9 hours in each direction to move about 30GB, from ZFS SSD to XFS HD and from ZFS HD to ZFS SSD. Unbalance is slow, but way faster than this. A network copy to the array starts off great, as expected, but during the move it eventually times out at 0 bytes from the Windows machine. So do I stay with the devil I know (XFS spinning rust, Btrfs SSD cache), or move to full ZFS or a hybrid? I would like to achieve greater transfer speeds, more akin to my current Synology, which uses Btrfs. If I move away from XFS, how do I go about that?
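For a sense of scale, the mover times quoted above work out to well under the sustained speed of even a single spinning disk. A quick back-of-the-envelope check, assuming decimal gigabytes and the stated ~9 hours:

```python
# Effective throughput of the quoted move: ~30 GB in ~9 hours.
size_mb = 30 * 1000          # 30 GB in MB (decimal units, an assumption)
seconds = 9 * 3600           # 9 hours in seconds
throughput = size_mb / seconds
print(f"{throughput:.2f} MB/s")  # ~0.93 MB/s
```

Under 1 MB/s against drives that typically sustain 150+ MB/s sequentially suggests the bottleneck is somewhere other than the disks themselves.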
  11. I have had no luck installing Varken; I think it may be a dead project. GitHub is 4-6 years old, the Discord links at both Grafana and GitHub are dead, the wiki returns 304... Any suggestions?
  12. Are you not using an SSD cache drive for file transfers? You may want to read my quest to get faster writes. The simple answer: @JorgeB suggested Turbo Write, and a cache drive makes it livable. Turbo Write improves both writes and reads. My Post
  13. I do not think your issue is the same as the OP's. You are doing a backup with encryption and the highest level of deduplication that Duplicacy can do; I would think that if you increased the resources for this container it would speed up (no experience). I was amazed how long the new PC I set up took to back up via Synology Active Backup compared to the original PC it was replacing. This was caused by the 90% CPU usage, and transfer rates were dismal, even though the Synology's transfer rates are faster than my Unraid's when testing file transfers.
  14. @JorgeB I marked it as a solution, as this may be the best I can expect from Unraid. Synology is still faster, but it writes to all disks, sharing the love evenly, and I have no cache in the Synology. My test is now a 151 GB directory with 10 video files (all figures in MB/s): before turbo write, 180 read / 75 write; after turbo write, 230 read / 220 write; with cache enabled (4 TB SSD), 490 read / 380 write. iperf3: 7.79 Gbits/second.
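For context, the iperf3 figure puts an upper bound on what any network copy could reach, and even the cached write speed above sits well below it. A quick conversion, using the decimal units iperf3 reports in:

```python
gbits_per_s = 7.79
mb_per_s = gbits_per_s * 1000 / 8   # Gbit/s -> MB/s (decimal)
print(f"{mb_per_s:.0f} MB/s")       # ~974 MB/s
# The 380 MB/s cached write is well under this, so the 10GbE link is not
# the bottleneck; the cache/filesystem write path is.
```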