Everything posted by darckhart

  1. Is there a way (in some setting maybe) to make sure this plugin obeys the same update-check procedure as other Unraid plugins? I.e., when I want to check updates for all my plugins, I go to the very top menu bar, click Plugins, then the Check for Updates button at the top right. But UD appears to check for updates on its own, outside of this, and then adds a notification banner to the main Dashboard. I only want this Unraid box to go out to the internet when I explicitly tell it to. Am I missing a configuration somewhere? Thanks!
  2. Thank you! When I clicked the docs link in the OP I scanned and didn't see a JF specific FAQ and didn't even think to check the general unraid one! I'll remember that for future!
  3. I haven't updated since v10.7.2 and updated last night to v10.7.5 apparently. Unfortunately, JF has broken direct stream/direct play. Is there any way to rollback this container?
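     For anyone else wanting to roll back: a container can usually be pinned to a published version tag instead of "latest". On Unraid you would normally just change the container template's Repository field to something like jellyfin/jellyfin:10.7.2 and hit Apply; the equivalent docker CLI steps are sketched below. The container name, port, and volume paths are examples only; adjust to your own template.

     ```shell
     # Sketch (assumptions flagged above): roll Jellyfin back by pinning a
     # version tag. Assumes the image publishes a "10.7.2" release tag.
     TAG="10.7.2"
     docker pull "jellyfin/jellyfin:${TAG}"
     # Replace the running container with one on the pinned tag.
     docker stop jellyfin
     docker rm jellyfin
     docker run -d --name jellyfin \
       -p 8096:8096 \
       -v /mnt/user/appdata/jellyfin:/config \
       -v /mnt/user/media:/media \
       "jellyfin/jellyfin:${TAG}"
     ```

     Because the config volume lives outside the container, swapping the image tag should keep the server's library and settings, though downgrading across database schema changes is not always safe.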
  4. just an update: ran a parity check over this weekend. same issue (throughput is around 65 MB/s) even with the CPU MHz uncapped, mitigations off, and pstate set to performance. seems like downgrading may be the only option now.
  5. Thanks. I'll give it a try this weekend. If method 1 works permanently, I guess that will be good since I won't have to downgrade.
  6. Interesting link! Thanks so much for the find! I think it is making a difference. The first /proc/cpuinfo command shows me 480 MHz. After the second command and then check CPU again, it now shows one core boosted up to 2.2 GHz. (I'm doing a SHA2 hash.) I don't understand your last post though: "add that line to the go file." What is that and how do I do it? Also, in case I need to do it, how do I downgrade? I found instructions for how to go back to the previously used one (but for me that would be 6.9.0-b25) but I need to go further back. How is that done?
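     For later readers: the "go file" is /boot/config/go, a script Unraid runs at every boot, so appending the governor command there makes the fix persist across reboots. A minimal sketch, assuming the kernel exposes the standard cpufreq sysfs interface:

     ```shell
     # Check the current per-core clock speed.
     grep "cpu MHz" /proc/cpuinfo

     # Force the "performance" cpufreq governor on every core.
     # Append these lines to /boot/config/go so they run at each boot.
     for gov in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
       echo performance > "$gov"
     done
     ```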
  7. Thanks for both pieces of advice. I will report back after trying the downgrade. (Might be a while, since I want to let the check complete.)
  8. I was directed to this thread from one I started recently in General Support here. Lots of my config info, etc., is posted there. In brief, I have an Intel J3160 CPU with 16 GB RAM and an array of 6x 6TB 7200 rpm data drives with two parity (2x 6TB), all the same model HGST Deskstars. I also have 2x Samsung 850 EVO 1 TB for the cache pool. I have not changed any hardware since the beginning. I am also experiencing VERY DRASTIC SLOWDOWN. I can't tell when I upgraded to each newer version of UnraidOS, but I am guessing from my Parity History that it parallels what other users here have seen. I started with a 160 MB/s checking speed, dropping to about 85 MB/s, and now on 6.9.0-b30 I am seeing a paltry 23.5 MB/s. After testing with DiskSpeed, all disks and controllers report throughput of 100-200 MB/s. I DO have a wimpy CPU, which, after reading all the posts, sounds like it is the particular piece of hardware that is not taking it well. There are some outlier 'high throughput speed' entries in my parity check log, but I am attributing that to the way the log seems to calculate, since I pause my check during the day (too hot) and restart again at night. parity-history.txt
  9. Yes, that is correct. There have been no hardware configuration changes. I have always had two parity. Thanks very much for pointing out that issue thread. I will add to it. I am a little worried about downgrading. Might it break compatibility with things?
  10. Really appreciate your continued help troubleshooting this! Good to know it is not disk related. Your highlighting of the CPU is interesting. However, I have NOT added nor changed any hardware since the beginning: same CPU, same drives, same number of drives. Additionally, it is not a gradual decrease in throughput over time; it appears as huge drops. See attached throughput history. In the beginning it was blazing fast, like 160 MB/s. Then somewhere along the way it dropped to 85 MB/s. (I believe the super high outlier rates are an artefact of how the table is generated. I don't think it takes into account that I pause the check, so when it resumes and there's a teeny bit left to check, it might appear way faster than it should be.) I am trying to guess (with no real knowledge about how all this works) what might be influencing throughput. 1. We seem to have concluded that it's not disk related nor controller related. However, when looking at the output of that terminal command, the speeds looked odd to me. They are all such similar numbers that it made me wonder if something was acting to cap them. 2. Is it software related? I.e., I thought maybe the Linux kernel updates with mitigations for Intel might be a factor, but we checked that too and it doesn't seem so. Has maybe the Parity Check function itself changed in the UnraidOS updates? 3. I would also think that as the parity drive fills up with parity data, it would take longer to get through it all. However, that shouldn't influence throughput, correct? parity-history.txt
  11. Looks like disabling mitigations did not affect anything. I am still seeing checking throughput at about 23 MB/s. I attached two images: one is for Parity 1 and Disk 4, the other is Parity 2 and Disk 2. Controller throughput looks fine to me? Here is the output of the terminal command. Seems like something is capping it?

      Average:  DEV     tps     rkB/s   wkB/s  areq-sz  aqu-sz  await  svctm  %util
      Average:  loop0   0.00      0.00    0.00     0.00    0.00   0.00   0.00   0.00
      Average:  loop1   0.00      0.00    0.00     0.00    0.00   0.00   0.00   0.00
      Average:  loop2   0.00      0.00    0.00     0.00    0.00   0.00   0.00   0.00
      Average:  loop3   0.00      0.00    0.00     0.00    0.00   0.00   0.00   0.00
      Average:  sda     0.00      0.00    0.00     0.00    0.00   0.00   0.00   0.00
      Average:  sdb     0.00      0.00    0.00     0.00    0.00   0.00   0.00   0.00
      Average:  sdc    33.53  22534.29    0.00   672.00    1.06  31.66   2.93   9.83
      Average:  sdd    33.53  22534.29    0.00   672.00    1.03  30.58   2.28   7.65
      Average:  sde    33.49  22507.43    0.00   672.00    1.21  36.14   7.50  25.11
      Average:  sdf    33.49  22507.43    0.00   672.00    1.23  36.66   7.88  26.39
      Average:  sdg    33.49  22507.43    0.00   672.00    1.23  36.66   7.78  26.07
      Average:  sdh    33.49  22507.43    0.00   672.00    1.25  37.19   8.22  27.55
      Average:  sdi     0.00      0.00    0.00     0.00    0.00   0.00   0.00   0.00
      Average:  sdk     0.00      0.00    0.00     0.00    0.00   0.00   0.00   0.00
      Average:  sdj    33.53  22534.29    0.00   672.00    1.11  33.11   4.59  15.41
      Average:  sdl    33.49  22507.43    0.00   672.00    1.13  33.74   5.20  17.41
      Average:  md1     0.00      0.00    0.00     0.00    0.00   0.00   0.00   0.00
      Average:  md2     0.00      0.00    0.00     0.00    0.00   0.00   0.00   0.00
      Average:  md3     0.00      0.00    0.00     0.00    0.00   0.00   0.00   0.00
      Average:  md4     0.00      0.00    0.00     0.00    0.00   0.00   0.00   0.00
      Average:  md5     0.00      0.00    0.00     0.00    0.00   0.00   0.00   0.00
      Average:  md6     0.00      0.00    0.00     0.00    0.00   0.00   0.00   0.00
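     One sanity check on those numbers: each active data/parity disk shows rkB/s of roughly 22,507-22,534, i.e. about 22 MB/s per drive, which lines up with the reported ~23 MB/s overall check rate. That is consistent with the whole stripe being paced together rather than one slow disk. The exact command used isn't shown in the post, but per-device "Average:" tables like this typically come from sysstat's sar; a sketch, assuming the sysstat package is available:

     ```shell
     # Sample disk activity every 30 seconds, 10 times; sar prints one
     # block per interval plus a final "Average:" summary across samples.
     # Filter to the summary lines like the table quoted above.
     sar -d 30 10 | grep '^Average'
     ```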
  12. I installed the Disable Mitigation plugin and the DiskSpeed docker. I had to reboot for these to take effect, which means the parity check cancelled about 70% in, so I guess I have to start over. Attached here are the results of DiskSpeed. They are all over 100 MB/s. How do I interpret these results against the parity check throughput? I will have to wait until tonight to restart the parity check to see if disabling the CPU mitigations has any effect. (Too hot during the day.)
  13. Thanks both. I will check into those two things and report back.
  14. btw thanks for your troubleshooting support! The Unassigned Device is a USB hard drive dock, so nothing to worry about there. I don't think anything should be interfering when it's checking? I really only have a media server docker (Jellyfin), and I turn it off before doing the check. I was wondering (maybe) if the Unraid updates to new Linux kernels invoke some harsh mitigation penalty for the crappy Intel CPU and so it is slower? Then again, I don't know if any of those operations are even required in a parity check. Maybe not?
  15. OK attached! Thanks amphora-diagnostics-20201026-1313.zip
  16. So I upgraded from 6.9.0-beta25 to beta30 a couple days ago. I started a parity check last night. Woke up this morning, wanted to see progress, and it was not nearly where I thought it would be. Turns out the checking rate is only around 23 MB/s! I try to run a parity check every month or every other month as I remember to, and usually the checking rate is about 85 MB/s (and when things were brand new, I swear it was around 160 MB/s). No hardware has changed; only maybe the Community plugins that I update fairly regularly. SMART status says all my drives are 'healthy' (6x 6 TB data, plus 2x 6 TB as parity, all HGST Deskstar 7200 rpm) and 2x 1 TB Samsung 850 SSD as cache. Any ideas? Thanks
  17. When array is started, and no use is detected for some time, all drives spin down. This seems like good behavior. However, when array is stopped, the drives never spin down. I think default behavior should be like previous case: spin down all drives after some period of time. Example: Power outage today while I was away at work. When power came back on, computer back on. Unraid started, but array is stopped. When I got home this evening and found out, all drives had been happily spun up all afternoon in the hot summer weather.
  18. So IDK exactly how hard drives work when there's no reads or writes, but physically there's clearly a difference between the drives being "spun down" versus "spun up," yes? And while they aren't "doing anything," spun up is always hotter. OK, so far all makes sense. But if they're not "doing anything," then shouldn't they be optimized to reduce power, etc.? Just seems like good practice. Even hard drives in my external USB dock seem to go idle after a period of time. Anyway, if all drives are spun up (there are 10x HDDs packed together), they'll start at a relatively normal 28C and, even when "doing nothing," come up to about 33-35C in the course of a couple hours. The Too Hot warning comes on around 50C I think, which, I agree, should not have occurred if the drives were "doing nothing." (So that's something different to look into, I suppose. Maybe it WAS doing something.) I add more cooling when I'm doing a parity check (run it at night, open windows, turn on a fan and point it at the box), but during the day, unless it's a hot day, I'll get complaints like "the fan's too loud and noisy," "why's it on when no one's in that room," etc. It's already in a small box without the ability to add more cooling because of the complaints: "it looks so ugly," "it's too loud," etc. Like I said, I'm just trying to understand the default behavior since it was not what I thought would happen, and trying to determine if what I thought was supposed to happen even makes sense in the first place. (Like, I notice the cache drives never spin down regardless, which I guess sort of makes sense since they're cache drives, and I store my dockers on there so I don't particularly want them to be slow to access anyway.)
  19. It wasn't so much a use case as me being dumb and forgetting. I stopped the array after its parity check because I wanted to fiddle with the cache pool and then fiddle with my router settings, and I didn't want anybody else accessing the content on the array while I was doing those things. As you can imagine, getting involved with this and a bunch of other distractions, by the time I got back to the main webpage to check things, I had a ton of notifications that all my hard drives were Danger! Too hot! (Summer here is hot.) I thought to myself that that didn't make sense, but sure enough all the drives were spun up when I was expecting them not to be. So I figure it'd be nice behavior that, regardless of whether the array is enabled or not, the drives are spun down if there's no activity after X time.
  20. Thanks for clarifying. Is there any documentation that might explain the reasoning behind this? (I searched a bit, but it is difficult to find.) I suppose I can submit a feature request too.
  21. https://wiki.unraid.net/Hardware_Compatibility#Network_Controllers https://www.amazon.com/s?k=pcie+Broadcomm+BCM5751 etc HTH
  22. do you need two separate drives? is there a reason why one drive is 1TB and the other is 4TB? (I mean you obviously plan to dedicate one for one thing and the other for the other thing, but does it have to be so?) I am basically doing what you're doing (dockers and data transfer) but I'm running a single cache pool with multiple drives in it.
  23. I am running v6.9.0-beta1. I want to check if this behavior is normal. From "Main" tab, I clicked stop array (did not power down, nothing else, just stopped array). First thing it does is spin up all the drives in order to stop which seems weird to me but OK. So it does the thinking animation and does whatever it needs to do and finally changes status to "array stopped." Awesome. But all my drives are still spun up. I come back to check hours later, and all the drives are still spun up. If the array is in stop mode, shouldn't the drives all spin down after a period of time? (like it normally does when nothing is being accessed?) Seems weird behavior to me.
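     As a stopgap while the array is stopped, the drives can be spun down by hand from the terminal. A sketch, assuming hdparm is available; the device names below (sdc through sdh) are examples and must be checked against your own system first:

     ```shell
     # Manually put idle array disks into standby (spun down) right now.
     # Device names are examples only; verify yours before running.
     for dev in /dev/sd{c..h}; do
       hdparm -y "$dev"    # -y: move the drive into standby immediately
     done

     # Optionally let the drive spin itself down after idling. With -S,
     # values 241-251 count in 30-minute units, so 241 is about 30 min:
     # hdparm -S 241 /dev/sdc
     ```

     Note that the Unraid emulated-array layer normally handles spin-down itself while the array is started, so this is only relevant for the stopped-array case described above.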
  24. I recently updated the app (think it's on v10.5.2 now) and suddenly subtitles don't work anymore. Tried a few different browsers (Safari, Firefox, Chrome) and they're all busted. I played the same file from before update where subs did show, and now after update they don't show. Is this an issue I should file with the JF devs? edit: answered my own question with some searching on github. looks like they fixed the bug in the nightly. will hang tight for next update. https://github.com/jellyfin/jellyfin/issues/2650
  25. OK, so I was reading the help portion again, and, being a newb, I think the way it is written is a little confusing. When creating a new share and presented with the 'use cache disk?' option, here's how I interpret the help messages: 1. No - makes sense. Don't bother using the cache disk at all, so don't touch it. All new files go straight to the array. 2. Yes - makes sense. Use the cache disk as intended (copy new files into the cache temporarily, because they will then be moved from cache to array at a scheduled time, emptying out the cache). 3. Only - doesn't make sense on first read, but makes sense after realizing that it uses the cache disk unlike a typical cache (files reside there, and only there, permanently), i.e., use the cache disk instead of the array. 4. Prefer - doesn't make sense on first read, but makes sense after considering option 3. Here it means prefer to use the cache disk instead of the array as long as space is available on the cache disk.