bkastner

Everything posted by bkastner

  1. As a quick suggestion for those who want to maintain historical checks: as long as you have notifications turned on, you should be emailed the results of parity checks. While I agree it's nicer to have it all stored in UnRAID, I can go back through months of parity check results via the email route, so it's not such a huge deal.

     > It already does this, doesn't it? My monthly parity check was still on 6.1.4 and I got an email when it finished: Event: unRAID Parity check / Subject: Notice [uNSERVER] - Parity check finished (0 errors) / Description: Duration: 14 hours, 50 minutes, 40 seconds. Average speed: 112.3 MB/sec / Importance: normal

     Yes, that is what I was suggesting. Everyone who wants to keep track of old parity check stats should still have them in their email, provided they have notifications on. It's what I've used for the last several months to keep track of parity check times between different UnRAID versions.

     > This may be true but everyone wants the shiny new play toy. "Psssh, you still check email logs... I just click a button"

     Trust me, I am looking forward to this too. I am just saying it's not like we had nothing previously.
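Since the notification emails are the de facto history here, a small helper can turn a pile of saved notifications into a table of durations and speeds. This is my own sketch (not part of unRAID); it assumes the "Description:" line follows the exact format quoted above:

```python
import re

# Matches the body of an unRAID parity-check notification, e.g.
# "Duration: 14 hours, 50 minutes, 40 seconds. Average speed: 112.3 MB/sec"
PATTERN = re.compile(
    r"Duration: (\d+) hours?, (\d+) minutes?, (\d+) seconds?\."
    r" Average speed: ([\d.]+) MB/sec"
)

def parse_parity_email(body):
    """Return (duration_seconds, avg_speed_mb_s), or None if no match."""
    m = PATTERN.search(body)
    if not m:
        return None
    hours, minutes, seconds = (int(m.group(i)) for i in (1, 2, 3))
    return hours * 3600 + minutes * 60 + seconds, float(m.group(4))

body = ("Description: Duration: 14 hours, 50 minutes, 40 seconds. "
        "Average speed: 112.3 MB/sec Importance: normal")
print(parse_parity_email(body))  # (53440, 112.3)
```

Run each saved email through it and you have the same month-over-month comparison the new GUI history gives you.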
  2. As a quick suggestion for those who want to maintain historical checks: as long as you have notifications turned on, you should be emailed the results of parity checks. While I agree it's nicer to have it all stored in UnRAID, I can go back through months of parity check results via the email route, so it's not such a huge deal.

     > It already does this, doesn't it? My monthly parity check was still on 6.1.4 and I got an email when it finished: Event: unRAID Parity check / Subject: Notice [uNSERVER] - Parity check finished (0 errors) / Description: Duration: 14 hours, 50 minutes, 40 seconds. Average speed: 112.3 MB/sec / Importance: normal

     Yes, that is what I was suggesting. Everyone who wants to keep track of old parity check stats should still have them in their email, provided they have notifications on. It's what I've used for the last several months to keep track of parity check times between different UnRAID versions.
  3. As a quick suggestion for those who want to maintain historical checks: as long as you have notifications turned on, you should be emailed the results of parity checks. While I agree it's nicer to have it all stored in UnRAID, I can go back through months of parity check results via the email route, so it's not such a huge deal.
  4. Since upgrading to 6.1.4 I've run 2 parity checks: once using the default settings, and once setting md_sync_thresh to md_sync_window minus 1. I am also including my Nov 1 parity check results from 6.1.3 for comparison. This is using 3 M1015 cards, and all three checks finished with 0 errors:

     6.1.3 (Nov 1): Duration: 18 hours, 1 minute, 11 seconds. Average speed: 92.5 MB/sec
     6.1.4 (no thresh set): Duration: 18 hours, 18 minutes, 27 seconds. Average speed: 91.1 MB/sec
     6.1.4 (thresh set): Duration: 17 hours, 39 minutes, 43 seconds. Average speed: 94.4 MB/sec

     So, 6.1.4 seemed to slow me down a bit, but setting the thresh value shaved about 40 minutes off the check compared to the no-thresh run, which isn't bad.
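For anyone who wants to sanity-check numbers like these: the reported average speed is just parity size divided by duration. A quick sketch, assuming a 6 TB parity disk (consistent with the durations and speeds quoted, and with the 6 TB total size mentioned in a later post); unRAID reports sizes in decimal units, so 1 TB = 1,000,000 MB:

```python
def avg_speed_mb_s(parity_tb, hours, minutes, seconds):
    """Average parity-check speed in MB/sec for a given size and duration."""
    total_mb = parity_tb * 1_000_000               # decimal TB -> MB
    duration_s = hours * 3600 + minutes * 60 + seconds
    return total_mb / duration_s

# The three runs above, recomputed from size and duration:
print(round(avg_speed_mb_s(6, 18, 1, 11), 1))   # 92.5  (matches the 6.1.3 email)
print(round(avg_speed_mb_s(6, 18, 18, 27), 1))  # 91.0  (email said 91.1; the 6 TB is nominal)
print(round(avg_speed_mb_s(6, 17, 39, 43), 1))  # 94.4  (matches the thresh-set email)
```

The 0.1 MB/sec discrepancy on the middle run is just rounding: the true array size is not exactly 6.000 TB.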
  5. I have the same issue, but just hitting refresh in the browser gets things moving again. It's been happening for a few months, but I refresh automatically to fix it, so I didn't really think about it.
  6. I do have Plex Home configured, but maybe don't have the user configured correctly on the Android device. I will have to look into that again and see if this is easier than I was thinking. Thanks for the link and the suggestions.
  7. I switched to H310s for just that reason. I also wanted a few cards with ports out the back instead of the top (like the M1015's) for my N54Ls and N40L MicroServers. The above-mentioned H200 appears to be Dell's version of the M1015 (it looks like it based on pictures, anyway) and would likely be cheaper on eBay than an M1015. The port location is a huge consideration: I had 0.5m SFF-8087 cables which were great with the SAS2LP cards, but when I installed the M1015 cards they were too short, so I had to buy 1.0m SFF-8087 cables, which was an unexpected expense (especially since I bought 6 to cover all 6 backplanes on the Norco 4224). If you are swapping out cards, consider cable length before you buy new ones. I sure wish I had.
  8. Thanks for the replies. The VM option is what I figured would be the end solution. I do have Plex Home running (I think: when I click on Users I see 'My Home', and I'm not sure how else to confirm this). The issue is that I am still the primary user, so I can switch to my daughter's account on the Plex server and it works, but on the Android device her libraries show up as shared, not local. She is only 5, so I want to cut out as many steps as I can, especially since the remote doesn't make it incredibly easy. I will likely just set up another PMS server in a VM and run it that way. Thanks for the suggestions.
  9. I have Plex running in a Docker on UnRAID, with my primary user and a secondary user for my daughter, and I have tagged a bunch of TV/movies for her. I am in the process of setting up a TV for her with an Android box to run Plex; however, the app is not as good as some of the others. Because I am logging in as my daughter, the media shares show up as Shared Media, and the home page is blank when I log into Plex. Ideally I want to get to the point that when Plex launches, she can see the Movies and TV libraries. My thought was that if I configured another Plex Media Server under her name, then when she logged into the Plex app they would be her personal libraries, not shared. So I basically have two questions: 1) Has anyone successfully done this? If so, any caveats to consider? 2) Can anyone suggest a better approach? I am going to be wireless to the device (at least for now), so Plex works way better than Kodi would in this instance. Any suggestions would be appreciated.
  10. I seem to remember having to set something in the SAS2LP BIOS; I think it's turning off the INT13 option. You need to do this on both cards, from what I remember.
  11. I think that helps a lot. Spending $110 on an aged CPU is probably not worth it when I can pick up a new motherboard/CPU with a "relatively" recent i5 or i7 and achieve much better performance at a relatively modest price difference. For reference, cpubenchmark is always the first place I go when looking at a CPU. It helps you understand whether an extra $20 investment in a CPU will give you a good bump in performance or just isn't worth the money. It also helps, when you have a fixed budget, to figure out which CPU makes the most sense (especially since there are often different socket options at any given time). You will also see a number of people on these forums mention PassMark scores; it's a pretty common reference for everyone. Lastly, as a rule of thumb, if you are looking at Plex at all, you will want around 2000 PassMark per concurrent stream you plan on running.
  12. Any time you are looking at a CPU upgrade it's worth checking out http://www.cpubenchmark.net/. This will let you see the PassMark score of the prospective CPU vs. your existing one. So, looking at your scenario:

      Celeron 430: 491
      E8400: 2179
      Q9650: 4268

      So, your Celeron is very low end (no surprise), but the Q9650 is a good jump from the E8400 (basically double the performance).
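Comparisons like this, plus the roughly-2000-PassMark-per-Plex-stream rule of thumb mentioned in the post above, are trivial to script. A sketch using the scores quoted here (the function names are mine):

```python
# PassMark scores from cpubenchmark.net, as quoted in the post above.
SCORES = {"Celeron 430": 491, "E8400": 2179, "Q9650": 4268}

def relative(new_cpu, old_cpu):
    """How many times faster the new CPU benchmarks vs. the old one."""
    return SCORES[new_cpu] / SCORES[old_cpu]

def plex_streams(cpu, passmark_per_stream=2000):
    """Rough concurrent-transcode capacity (rule of thumb, not a guarantee)."""
    return SCORES[cpu] // passmark_per_stream

print(round(relative("Q9650", "E8400"), 2))  # 1.96: "basically double"
print(plex_streams("Q9650"))                 # 2 concurrent streams
```

The integer division is deliberate: a CPU with 1900 PassMark can't reliably carry a full transcode, so round down.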
  13. When you say "Parity never spin down" I am assuming you mean that on the parity disk you've set the spin down delay to never, correct? I am not sure where else this could be set, but since your phrasing is different I just wanted to confirm you are doing the same thing I did.
  14. > I experienced the same symptoms. In my case it seemed like problems encountered during the parity check (drives red-balling, SAS/SATA resets, etc.) caused an issue where the correct parity values were not written, and the same sectors would come up each time a parity check was run. In my case, moving all drives onto the SAS2LP and ensuring they were spun up ahead of time finally eliminated the red-balls and got me a clean parity check, but that workaround hasn't helped everyone.

      Not sure if it's applicable here, but I had issues with parity errors during parity checks consistently until I set my parity drive to never spin down. I have no clue why this fixed things, but it did. This was also with all my drives on SAS2LP cards.
  15. > While this may be true, it would also be nice to see some direction from LT on this. People are randomly trying things to find the cause of their issue; having LT provide a framework for what people could test that would actually be useful to LT for diagnosis would benefit everyone. There are obviously a number of people reporting issues (either SAS2LP or other). We need someone to point them in a common direction so that hopefully the findings can be applied to a new build (using the current or a newer kernel). That sort of direction can only come from LT.

      > Agree. There are, for example, folks buying new SATA controllers to replace SAS2LP's, even though the SAS2LP's are excellent controllers ... but have this sudden issue with 6.1.3. It would be nice to know if Limetech has had any contact with SuperMicro regarding this, to see if there may be a solution forthcoming. And while my 31% increase in parity check time between 5.0.6 and 6.1.3 was really disappointing, it wasn't as bad as the user who had a 69% increase with a CPU that has 33% more "horsepower" than mine!! It would be really nice to know if there are a few parameters that could be adjusted to help with this [e.g. is there a "hidden" parameter that sets the priority of the parity check task??].

      I am one of those who swapped the card out for a couple of M1015s, but that was due to my SAS2LP card crashing and causing disks to fall offline. I've actually taken a 10MB/sec hit on parity checks by doing so, but I trust that my infrastructure is more stable, which is way more important to me.
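Since parity-check duration scales inversely with average speed, the percentage figures people trade in this thread (the 31% and 69% time increases quoted above) convert directly into speed drops. The conversion is just arithmetic:

```python
def speed_change_from_time_change(time_increase_pct):
    """A +X% parity-check duration equals this % drop in average speed."""
    factor = 1 + time_increase_pct / 100   # new duration / old duration
    return (1 - 1 / factor) * 100          # speed scales as 1/duration

print(round(speed_change_from_time_change(31), 1))  # 23.7: a 31% longer check is ~24% slower
print(round(speed_change_from_time_change(69), 1))  # 40.8: a 69% longer check is ~41% slower
```

Worth keeping in mind when comparing reports: a scary-sounding time increase corresponds to a somewhat smaller drop in MB/sec, because the two percentages use different baselines.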
  16. While this may be true, it would also be nice to see some direction from LT on this. People are randomly trying things to find the cause of their issue; having LT provide a framework for what people could test that would actually be useful to LT for diagnosis would benefit everyone. There are obviously a number of people reporting issues (either SAS2LP or other). We need someone to point them in a common direction so that hopefully the findings can be applied to a new build (using the current or a newer kernel). That sort of direction can only come from LT.
  17. Yes, we get royally screwed here in Canada, don't we? We don't get the same deals, and at this point, between the value of our dollar, shipping across the border, and voiding the warranty, I fail to see any real point in buying things like this.
  18. > I still use the powerdown package with v6. The main reason is that one of my VMs (my Mac OS X VM) doesn't recognize the shutdown command that unRAID sends when you shut down the array. This causes my system to do an unclean shutdown, resulting in a parity check starting once the system gets turned back on. I ended up creating an S00.sh script to force-shutdown that VM as a band-aid until I can figure out a better method, or until OS X supports KVM commands (wishful thinking). Also, I like the fact that the powerdown plugin saves my logs to my flash drive.

      I use the plugin as well for the saved-logs feature. I am not sure if I need it for other reasons, but I install it out of habit as part of server builds.
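The S00.sh script itself isn't posted in the thread, so as a hypothetical illustration of the same idea (ask the guest nicely, then pull the plug), here is a Python sketch built on the standard `virsh shutdown` / `virsh destroy` / `virsh domstate` commands. The function name and the injectable `runner` hook are my own, there so the logic can be exercised without libvirt present:

```python
import subprocess
import time

def force_shutdown(vm, runner=None, wait_seconds=60, poll=5):
    """Request a graceful guest shutdown; hard-stop the VM if it ignores it.

    Hypothetical helper, not the actual powerdown plugin script.
    `runner` is injectable for testing; by default it shells out to virsh.
    """
    run = runner or (lambda *args: subprocess.run(
        ["virsh", *args], capture_output=True, text=True).stdout.strip())
    if run("domstate", vm) != "running":
        return "already stopped"
    run("shutdown", vm)                  # graceful ACPI shutdown request
    deadline = time.time() + wait_seconds
    while time.time() < deadline:
        if run("domstate", vm) == "shut off":
            return "clean"               # guest honored the request
        time.sleep(poll)
    run("destroy", vm)                   # last resort: hard power-off
    return "forced"
```

An S00 script would then just call this for the stubborn VM before the array stops, avoiding the unclean-shutdown parity check.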
  19. In version 6 there is really no need for UnMenu (at least that I can see). The options available in the standard GUI and the Dynamix add-ins far exceed what UnMenu is able to provide (and are way more visually appealing). Also, the move to Dockers instead of plugins has eliminated the benefit that UnMenu used to provide there as well. I was a die-hard UnMenu fan in version 5, and even in the early version 6 betas; however, as the version 6 capabilities grew I tried moving away and never looked back. I think this is true for many users. For those on version 5 there is still no better option than UnMenu. For those moving to version 6, I would suggest working with the natively available options and seeing if there is anything you miss from UnMenu. Chances are there won't be, but if there is, I would suggest submitting a request for the Dynamix team to add it. Overall I think you will be pleasantly surprised at what you can do with just UnRAID today - I know I certainly was.
  20. > Current progress: Total size: 4 TB / Elapsed time: 3 hours, 19 minutes / Current position: 1,84 TB (46,1 %) / Estimated speed: 145,1 MB/sec / Estimated finish: 4 hours, 7 minutes / Sync errors detected: 0. I'm on FW version 18 - flashed it 2.5 years ago and it's working, so no need for a change, I think.

      Looking at your sig, I see you have a RAID 0 array as your parity. I am wondering if that is giving you the better-than-expected results. I am sure it would have a positive impact; I just don't know how much.
  21. > I have M1015 cards in my setup too. Just started a parity check; after 9 minutes the results are: Total size: 4 TB / Elapsed time: 9 minutes / Current position: 83,6 GB (2,1 %) / Estimated speed: 162,3 MB/sec / Estimated finish: 6 hours, 42 minutes / Sync errors detected: 0. See all hardware details in my sig below. I'm on unRAID v6B15 - did not upgrade further cos I'm on ESXi with the USB drops issue with v6 final.

      Wow... I would be interested in seeing the final results once the parity check is complete. That is definitely way higher than mine: I usually start around 120MB/sec but end up averaging 82MB/sec. I do have 2 drives with 750GB platters, which I know are not ideal and will try to replace down the road, but it would be great to get somewhere near where you are. For you and dikkiedirk: what firmware are you on? I am on P20, as it looked stable after the fix, but I am curious whether this makes a difference at all (as well as the unRAID version). It may be worth trying to go back to a beta build and see what happens. Can anyone confirm: if I am on 6.1.2, is there any potential issue with me moving back to a beta build for testing? I only have my one production box and don't want to risk my data at all.
  22. > The same tunables results happen for me when there is a bus bottleneck or it's hitting the maximum speed of the slowest disk at all settings. Are all your WD30EZRX disks 1TB/platter, or do you have some 750GB/platter? If all are 1TB/platter, parity check speed should start at 145-150MB/s; if you have at least one 750GB/platter, parity should start at about 125-130MB/s. But this can't explain why, according to what you posted in the SAS2LP thread, your starting speed is higher than with the SAS2LP yet in the end the average speed is lower.

      The parity check peaks around 125MB/sec, so it must include the 750GB platters. Of course, I wasn't smart enough to keep the tunables report from the SAS2LP cards the last time I ran them, but there were much larger variations with those cards, and the report had me applying much higher values for the various attributes. The tunables report for the M1015 cards says the following: "These settings will consume 136MB of RAM on your hardware. This is -422MB less than your current utilization of 558MB." So you can imagine how much higher the settings were with my last config. It's definitely bizarre, and somewhat frustrating, and I am not sure what to try next.
  23. I recently swapped out my SAS2LP cards for several M1015 cards, which I flashed to firmware 20. However, the parity check performance sucks. I was getting 95MB/sec with the SAS2LP cards, but I am now getting 82MB/sec with the M1015 cards. I re-ran the tunables script, which ended up with every setting identical (118.2 MB/sec). I am somewhat surprised that I've lost performance with these cards; I expected them to be at least as fast as my SAS2LP cards, if not faster. Does anyone have any suggestions? It's by no means the end of the world, but a bit annoying to say the least.
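To put the 95 to 82 MB/sec drop in perspective as wall-clock time, a quick sketch (assuming the 6 TB parity size shown in the next post's progress report, with 1 TB = 1,000,000 MB as unRAID counts it):

```python
def check_hours(parity_tb, speed_mb_s):
    """Parity-check duration in hours at a given average speed."""
    return parity_tb * 1_000_000 / speed_mb_s / 3600

print(round(check_hours(6, 95), 1))  # 17.5 h with the SAS2LP cards
print(round(check_hours(6, 82), 1))  # 20.3 h with the M1015s
print(round(check_hours(6, 82) - check_hours(6, 95), 1))  # 2.8 extra hours per check
```

So the 13 MB/sec loss costs close to three hours per monthly check: annoying, but a reasonable price if the M1015s stop the disks falling offline.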
  24. Lucky me to be running the opposite of everyone else. I did buy them stock, and then flash them with firmware 20. It looks like even with the new settings I am going to be sub-90MB/sec: Total size: 6 TB Elapsed time: 9 hours, 26 minutes Current position: 3.22 TB (53.7 %) Estimated speed: 83.6 MB/sec Estimated finish: 9 hours, 14 minutes I am confused and somewhat frustrated. Story of my life...