mifronte

Everything posted by mifronte

  1. Thanks dmacias for the explanation. BTW, excellent work on the plugin. I have not been able to reproduce the invalid server error. I noticed that if I toggle between the manual and auto setting for server on the settings page, my preferred server is sometimes on the list and sometimes not. My preferred server ID is 6285. I have noticed that speedtest.net in the browser is really sensitive to the client environment when you are testing gigabit connections. At that bandwidth, things like browser and operating system compatibility start to alter the results. For example, on Windows 10, Chrome does not appear to be optimized to achieve true gigabit on the speed tests, whereas Microsoft Edge has no problem. On Windows 7, Chrome yields the same results as IE. I was hoping there was a CLI version of speedtest.net that can truly test gigabit connections using sockets, to eliminate the browser and HTML from the equation. Test sites that only use HTML5 cannot correctly measure gigabit speeds.
  2. That is what I am doing. But my nearest server would disappear from that list. Then if I pick another server, run a speed test, and go back to the settings page, my nearest server would show up on the list again. I would pick it, run a speed test, and all would be fine, until I run another speed test and get the "invalid server ID" message. If I run the speed test at the command line: 1. Can I specify an actual server ID? 2. Can I force it to use sockets just like the current speedtest.net? I have symmetrical gigabit and the plugin's result is off by 100-200 Mbps.
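     In case it helps anyone later, this is roughly what I was hoping to do from the command line. It is only a sketch assuming the open-source speedtest-cli client is installed; I have not verified whether it can use raw sockets instead of HTTP, so it may not answer question #2.

        # list the available servers and look for my preferred one (ID 6285)
        speedtest-cli --list | grep 6285

        # run the test against that specific server ID
        speedtest-cli --server 6285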
  3. My preferred server keeps disappearing between runs. Is there a way to filter the server list down to a ZIP code, or to manually enter a server ID using the plugin? I have not read enough to look into the CLI option.
  4. Great feedback. I like the 5-year warranty on the Blacks, and usually most of my disks stay spun down. When they are in use (spun up) I prefer them to be as fast as possible. Noise and heat are not an issue since the server is in the mechanical room in the basement, where it is always cool.
  5. Are WD Blacks good for unRAID? These are supposed to be WD's best drives, so if the price per TB is reasonable compared to the Reds or Seagates, would you use them? Currently, for the 5TB models the Blacks are $38/TB and the Reds are $34/TB. That's only a $20 difference per drive.
  6. Finally had a chance to try johnnie.black's suggestion of moving all my Samsung HD203WI drives off of the motherboard's SATA ports and onto my AOC-SAT2-MV8 ports. The parity check speed significantly increased, back to where it was prior to V6.1.x.
  7. I will look into moving the Samsung drives to the AOC-SAT2-MV8 controllers. Since my Norco 4224 has hot-swap bays, can I just swap the drives' positions, or do I also need to reconfigure unRAID? Does unRAID care which controller and ports the drives are connected to?
  8. I am using two AOC-SAT2-MV8 (Marvell 88SX6081) controllers plus the onboard SATA ports on the Supermicro X7SBE motherboard. I believe all of them are running in AHCI mode. Is my only option to get new controllers? I don't believe there is an HBA mode for the motherboard or the AOC-SAT2-MV8 controllers.
  9. I don't use Plex, but I do have an older HDHomeRun (HDHR3-US) tuner on my network. I run a Windows 7 VM in unRAID and use Windows Media Center to record shows in WMC's native WTV format. I then use MCEBuddy to convert the WTV files to MKV without any processing except commercial removal. MCEBuddy will also move the converted file to my unRAID share, where I have access to it from any device on the network. The resulting MKV file contains the original video in whatever format WMC recorded (usually H.264). You can also have MCEBuddy transcode (I believe it uses Handbrake) to other formats.
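     For anyone who wants to do the same remux by hand instead of through MCEBuddy, a minimal sketch with ffmpeg (assuming ffmpeg's WTV demuxer handles the recording; the file names are just placeholders and commercial removal is not covered):

        # copy the existing audio/video streams out of the WTV container
        # into an MKV without re-encoding (file names are placeholders)
        ffmpeg -i recording.wtv -c copy recording.mkv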
  10. I may have an opportunity to purchase an HP Proliant DL360 Gen9 server that currently has an HP MSA 2040 SAN Storage (24 2.5" bays) attached. I may not purchase the MSA 2040 since it requires 2.5" SAS drives, which are too expensive for unRAID usage. However, if I purchase the HP Proliant DL360 Gen9 server, can someone tell me how I would go about adding 3.5" SATA SAN storage (if such a thing exists)? Edit: The more I read up on the HP Proliant DL360 Gen9 and the MSA 2040, the more I realize this server may be beyond my abilities, since I have never used SAN technologies. It sounds like a lot of special equipment may be needed, like special cables and controllers. I need to read up on SAN to see how everything fits together. This would be an auction purchase, so I would not know the exact configuration of the server until the deal is done. Also, with an auction, some cables and parts might be missing.
  11. nr_requests made no difference in parity check speed. Looks like if I want faster parity checks, I will have to look into replacing those drives and/or upgrading my server hardware.
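     For reference, this is the sort of adjustment I was testing. A sketch only, assuming the usual approach of writing a smaller value into each disk's sysfs queue; sdb and the value 8 are just example placeholders:

        # check the current queue depth for one disk (sdb is a placeholder)
        cat /sys/block/sdb/queue/nr_requests

        # try a smaller queue depth (8 is only an example value)
        echo 8 > /sys/block/sdb/queue/nr_requests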
  12. Are you planning to run a workstation or a server version of Windows? If you plan on using your machine as a server but are only running the workstation version, then one day you may hit the limit on the number of concurrent client connections. At least this was the case prior to Windows 10. I have not kept up with the latest Windows server products, but this is why Microsoft has separate server and workstation licenses.
  13. Thanks gary. I will try nr_requests next to see if I get any performance gain during the parity check. It does not seem like the firmware patch had any impact. Starting the parity check with the VM running slowed the parity check down to 35 MB/s. Once I stopped the VM, the parity check speed went back up to 55 MB/s. So you are right that the CPU is having a harder time on the parity check with V6.1.x. In summary: with the VM running and all else being equal, I think I get 25 MB/s with md_sync_thresh set to md_sync_window/2 and 35 MB/s when it is set to md_sync_window - 1. With the VM stopped, my parity check hovers around 55 MB/s. Tonight I will give the nr_requests suggestion a try to see if the parity check speed improves. I guess this just gives me more incentive to start a new unRAID build. I had already vowed to stop buying 2 TB drives and look into 4TB or 6TB drives. However, now that my CPU is being taxed by V6.1.x, it looks like I will have to look into a new system. The SuperMicro X10SRH-CF looks interesting. There don't seem to be many mentions of it on the forum though.
  14. I am reporting the speed that unRAID reports at the end of the parity check; I assumed unRAID is reporting the average speed. Anyhow, the duration was 9 1/2 hours shorter than that first parity check (19 1/2 hours). It could all be related to not having the VM running, which is why I still need to start a parity check with the VM running to get a better idea of what caused the parity check speed to improve. I believe my second parity check was on the old firmware with the VM stopped. That run looked like it was going the same as the first run; I did not let that second parity check complete and canceled it 10 hours into the check with over 9 hours remaining.
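     For what it's worth, the reported figures roughly line up with an average over the whole check if I assume a 2 TB parity drive (my largest disks are 2 TB):

        19.5 hours ≈ 70,200 s  ->  2,000,000 MB / 70,200 s ≈ 28 MB/s  (first check)
        10.0 hours ≈ 36,000 s  ->  2,000,000 MB / 36,000 s ≈ 56 MB/s  (this check)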
  15. I have been playing with md_num_stripes, md_sync_window, and md_sync_thresh to try to get my parity check speed back to the level of V4 & V5 (about 75 MB/s) compared to the atrocious 25 MB/s of V6.1.x. It appears that the parity check speed varies very little regardless of what values I use for the tunables. Turning off my Windows 7 VM has more impact, but not a significant one (maybe 10 MB/s faster), since that leaves more CPU headroom for the parity check. Here are the values I tried for the tunables:

      Tunable          Default   Suggested
      md_num_stripes   1280      3584
      md_sync_window   384       1536
      md_sync_thresh   383       1535

      The semi-good news is that I have managed to get my parity check speed into the 55 MB/s range. Not great, but definitely a vast improvement over 25 MB/s. I was only able to improve the parity check speed after patching the firmware of the three Samsung HD203WI drives from version 1AN10002 to 1AN10003. I have two more tests to conduct to see if the speed increase is related to the patched firmware. The first test is to start the parity check with the VM running (like my initial parity check, where the speed was 25 MB/s). The second test is to play with the nr_requests parameter. I am now back to the defaults for md_num_stripes, md_sync_window, and md_sync_thresh, since changing these values had very little impact but added more load on the CPU and memory.
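     For anyone wanting to try the same values on the fly, these are the commands involved. A sketch only, assuming the same "mdcmd set" syntax shown for md_sync_thresh a couple of posts down also applies to the other two tunables:

        # apply the suggested values (assumes "mdcmd set" works for all three)
        mdcmd set md_num_stripes 3584
        mdcmd set md_sync_window 1536
        mdcmd set md_sync_thresh 1535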
  16. Thanks for all the suggestions. Page Update Frequency was already disabled. The CPU load was only 45% during the parity check with the VM running. The VM is pretty much idle since it is a Windows 7 VM for DVR tasks. I also set the tunable md_sync_thresh to md_sync_window - 1, from the default of md_sync_window/2, at the 60% mark of the parity check. It did not seem to have any immediate effect, but toward the 98% mark my parity check speed went up to 45 MB/s from 26 MB/s. Does the md_sync_thresh tunable take effect immediately, or only on the next parity check? It looks like johnnie.black has tried a lot of different things and I may have to accept defeat. My only hope is that I may be running a different configuration, so the tunables may help me.
  17. I don't have any setting for md_sync_thresh; I even searched disk.cfg and the setting is not present.

      edit: I just issued the following command during my parity check:

        mdcmd set md_sync_thresh 383

      Since I did not have it set, I assume the value of md_sync_window/2 was being used. I have now reverted to the old setting of md_sync_window - 1 (or 383, with the default md_sync_window value of 384). I sure hope this brings my parity check speed back up. Otherwise the parity check is useless unless I can find a window of over 24 hours to run it without having to write to my array.
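     To double-check what the md driver is actually using, I also dumped its status. A sketch, assuming the sync-related values appear in the mdcmd status output:

        # dump the md driver state and filter for the sync-related entries
        mdcmd status | grep -i sync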
  18. Here are my tunable settings:

      poll_attributes = 600
      md_num_stripes = 1280
      md_sync_window = 384

      I have 20 data disks + parity + 2 caches (in pool). Unassigned disks = 4. I don't understand what "default value of 1/2 or N-1" means.
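     Edit: if this refers to md_sync_thresh being set relative to md_sync_window (as the more recent posts above suggest), then with my md_sync_window of 384 the two candidate values would be:

        md_sync_window / 2  =  384 / 2  =  192
        md_sync_window - 1  =  384 - 1  =  383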
  19. Would the SAMSUNG_HD204UI also be affected? Since they were working fine prior to V6.1.x, would this be considered a bug to report so that LT can provide a fix?
  20. I upgraded to V6.1.7 from V6.1.6. Now my parity check is only about 27 MB/s; it was normally about 75 MB/s under V5 and V6.0.x. Sorry, I was not on any V6.1.x release long enough to remember the actual parity check speed there. I am running one Windows 7 Pro VM, and the only additional plugins I have are Nerd Tools and Cache Directories, both of which have been there since I upgraded from V5 to V6. Any suggestions on where to look to see why the parity check is so slow? Edit: Added syslog attachment. beanstalk-syslog-20160203-1023.zip
  21. I may have just experienced this too, for the first time, on my Windows 10 laptop. Accessing the shares or the webgui failed, and then it started working again a minute or two later.
  22. Is your unRAID set as the local master, and do you keep it running all the time? Windows name broadcasting relies on one machine being the master, and if the master is not on all the time, then the other machines will fight it out to become the master. That may explain why it works some of the time. If you don't keep your unRAID on all the time, then you may want to look into running a local DNS server.
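     If you want to pin the unRAID box as the local master instead of leaving it to browser elections, the relevant Samba global options look roughly like this. A sketch only; on unRAID you would typically add them through the SMB extra configuration rather than editing smb.conf directly:

        [global]
            # always contest and win the browser election on this subnet
            local master = yes
            preferred master = yes
            os level = 100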
  23. Successfully updated to V6.1.0 from V6.0.5 using the webgui plugin last night after clicking Check for Updates. The Windows 7 Pro VM, cache_dirs, and NERD plugins all seem to be working. I like the ability to disable disk sharing in the Global Share Settings. I wish it had been there prior to V6.1.0, to save me the trouble of having to disable it in each disk share's settings.
  24. I just upgraded to unRAID V6.1.0 and added the link command for mdcmd. However, I have two problems: 1. unMenu is unaware of my devices sdaa and sdab. It looks like it never accounted for devices going beyond sdz? 2. I am still receiving these errors, maybe related to #1?

      Sep 3 21:53:44 Beanstalk unmenu[7783]: cat: /sys/block/loo/stat: No such file or directory
      Sep 3 21:53:44 Beanstalk unmenu[7783]: cat: /sys/block/loo/stat: No such file or directory
      Sep 3 21:53:44 Beanstalk unmenu[7783]: cat: /sys/block/loo/stat: No such file or directory
      Sep 3 21:53:44 Beanstalk unmenu[7783]: cat: /sys/block/loo/stat: No such file or directory
      Sep 3 21:54:15 Beanstalk unmenu[7783]: cat: /sys/block/loo/stat: No such file or directory
      Sep 3 21:54:15 Beanstalk unmenu[7783]: cat: /sys/block/loo/stat: No such file or directory
      Sep 3 21:54:15 Beanstalk unmenu[7783]: cat: /sys/block/loo/stat: No such file or directory
      Sep 3 21:54:15 Beanstalk unmenu[7783]: cat: /sys/block/loo/stat: No such file or directory

      This is the message at the top of myMain: Couldn't find drivedb[loop0] Couldn't find drivedb[loop1]
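     My guess at what is going on, purely as an illustration (this is not unMenu's actual code): if the device name is taken as a fixed three characters, "loop0" becomes "loo" and "sdaa"/"sdab" collide with "sda"/"sdb". Iterating over /sys/block directly avoids assuming a name length:

        # walk every block device's stat file without assuming 3-character names
        for stat in /sys/block/*/stat; do
            echo "$stat"
            cat "$stat"
        done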
  25. So this will be my 3rd time posting a bug for the cache_dirs plugin, and I hope this time I am posting in the correct thread. I just installed the cache_dirs v1.6.9 plugin on unRAID V6.0.1. When I try to include 4 directories and press Apply, the first directory gets unchecked/dropped, so cache_dirs only caches 3 of the 4. My directories are named as follows: Asia, Movies, PBN, Shows. To include the first directory, Asia, I have to use the User Defined option of -i Asia.
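     As a workaround sketch, I can run the underlying cache_dirs script by hand and pass every share explicitly, assuming the -i include flag can be repeated just like in the User Defined field above:

        # include all four shares explicitly (assumes -i may be given multiple times)
        cache_dirs -i Asia -i Movies -i PBN -i Shows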