
Frank1940

Members
  • Posts

    9,982
  • Joined

  • Last visited

  • Days Won

    17

Everything posted by Frank1940

  1. Once you spin all the drives up to copy one large file, it does not make sense to switch modes. Remember, unRAID is copying one file at a time. Only the computer on the other end has any inkling whether this is a batch copy of 10,000 files or a copy of a single 45GB file. Plus, I have never had a cache drive on either of my two servers, so I have no idea what the real-world performance is using one. I can't believe that there is no performance hit from the overhead of file creation and space allocation.
  2. Today, unRAID can use two methods to write data to the array. The first (and the default) is to read both the parity and the data disks to see what is currently stored there. It then looks at the new data to be written, calculates what the parity data has to be, and writes both the new parity data and the data. This method requires accessing the same sector twice: once for the read and once for the write. The second method is to spin up all of the data disks and read the data stored on all of the data disks except the disk on which the data is to be stored. This information and the new data are used to calculate the new parity, and then the data and parity are written to the data and parity disks. This method turns out to be faster because of the latency of having to wait for the same read head to get into position twice with the default method versus only once with the second method. (The reader should understand that, across different disks, all disk operations essentially happen independently and in parallel.) For purposes of discussion, let's call this method turbo write and the default method normal write.

It has been known for a long time that normal write speeds are approximately half of read speeds, and some users have long felt that an increase in write speed was desirable or even necessary. The first attempt to address this issue was the cache drive: a parity-unprotected drive that all writes were made to, with the data transferred to the protected array at a later time. This was often done overnight or some other period when usage of the array would be at a minimum. This addressed the write speed issue, but at the expense of another hard disk and the fact that the data was unprotected for some period of time. Somewhere along the way, LimeTech made some changes and introduced the turbo write feature.
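The difference between the two methods is really just two ways of computing the same XOR parity. Here is a toy sketch for a single byte position across a hypothetical three-data-disk array (all values are made up for illustration); both routes must produce the same new parity:

```shell
# Toy parity arithmetic: parity is the XOR of the data on every data disk.
d1=0x5A; d2=0x11; d3=0x8C          # current contents of the three data disks
parity=$(( d1 ^ d2 ^ d3 ))         # current parity byte
new_d1=0x3C                        # new data destined for disk 1

# Normal (read/modify/write): read old data and old parity, then
#   new_parity = old_parity XOR old_data XOR new_data
rmw=$(( parity ^ d1 ^ new_d1 ))

# Turbo (reconstruct write): read the OTHER data disks instead, then
#   new_parity = d2 XOR d3 XOR new_data
turbo=$(( d2 ^ d3 ^ new_d1 ))

printf 'rmw=0x%02X turbo=0x%02X\n' "$rmw" "$turbo"   # prints rmw=0xA1 turbo=0xA1
```

Normal write touches each of the two involved platters twice (read, then write); turbo write touches every platter once, which is why it avoids the extra rotational latency.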
It can be turned on with the following command: mdcmd set md_write_method 1 and restored to the default (normal write) mode with this one: mdcmd set md_write_method 0. One could activate turbo write by inserting the command into the 'go' file (which sometimes requires a sleep command to allow the array to start before its execution). A second alternative was to type the proper command at the CLI. Beginning with version 6.2, the ability to select which method is used was included in the GUI. (To find it, go to 'Settings' >> 'Disk Settings' and look at the "Tunable (md_write_method):" dropdown list. 'Auto' and 'read/modify/write' are the normal write (and default) mode; 'reconstruct write' is the turbo write mode.) This makes it quite easy to select and change which write method is used.

Now that we have some background on the situation, let's look at some of the more practical aspects of the two methods. I thought the first place to start was a comparison of the actual write speeds in a real-world environment. Since I have a Test Bed server (more complete spec's in my signature) that is running 6.2b19 with dual parity, I decided to use this server for my tests. If you look at the specs for this server, you will find that it has 6GB of RAM. This is considerably in excess of what unRAID requires, and the 64-bit version of unRAID uses all of the unused memory as a cache for writing to the array. What will happen is that unRAID will accept data from the source (i.e., your copy operation) as fast as you can transfer it. It will start the write process and, if the data is arriving faster than it can be written to the array, buffer it to the RAM cache until the RAM cache is filled. At that point, unRAID will throttle the data rate down to match the actual array write speed. (But the RAM cache will be kept filled with new data as the older data is written out.)
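For those on pre-6.2 versions, the 'go' file approach might look like the sketch below. This is a hypothetical excerpt, not a tested configuration: the 60-second delay is a guess (lengthen it if your array starts slowly), and you should verify the mdcmd command works from your own CLI first:

```shell
#!/bin/bash
# /boot/config/go -- runs at boot on unRAID
/usr/local/sbin/emhttp &

# Wait for the array to come up, then switch to turbo write.
# The 60s sleep is an assumption; adjust for your hardware.
(sleep 60 && mdcmd set md_write_method 1) &
```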
When your copy is finished on your end and the transmission of data stops, you may think that the write is finished, but it really isn't until the RAM cache has been emptied and the data is safely stored on the data and parity disks. There are very few programs which can detect when this event has occurred, as most of them report the copy as finished when the last of the data is handed off to the network. One program which will wait to report that the copy task is finished is ImgBurn. ImgBurn is a very old freeware program that was developed in the very early days when CD burners were first introduced, back when an 80386-16MHz processor was state of the art! (The early CD burners had no on-board buffer and would make a 'coffee cup coaster' if one made a mouse click anywhere on the screen!) The core of the CD-writing portion of the software was done in assembly language, and even today the entire program is only 2.6MB in size! It is fast and has an absolute minimum of overhead when it is doing its thing! As it runs, it builds a log file of the steps and collects much useful data.

I decided to make the first test the generation of a BluRay ISO on the server from a BluRay rip folder on my Win7 computer. Oh, I almost forgot! Another complication of looking at data is what is meant by the abbreviations K, M and G --- 1000 or 1024. I have decided to report mine as 1000, as it makes the calculations easier when I use actual file sizes. I picked a BluRay folder (movie only) that was 20.89GB in size. I spun down the drives before I started each write, so the times include the spin-up time of the drives in all cases. I should also point out that in all of the tests, the data was written to an XFS-formatted disk. (I am not sure what effect using a reiserfs-formatted disk might have had.) Here are the results:

Normal: Time 7:20, Ave 49.75MB/s, Max 122.01MB/s
Turbo: Time 4:01, Ave 90.83MB/s, Max 124.14MB/s

Wow, impressive gain.
Looks like a no-brainer to use turbo write. But remember, this was a single file with one file table entry and one allocation of disk space. It is the best-case scenario. A test which might be more indicative of a typical transfer was needed, and what I decided to use was the 'MyDocuments' folder on my Win7 computer. Now, what to copy it with? I have TeraCopy on my computer, but I always had the feeling that it was really a shell (with a few bells and whistles) around the standard Windows Explorer copy routine, which probably uses the DOS copy command as its underpinnings. I was also aware that Windows Explorer doesn't provide any meaningful stats and, furthermore, it just terminates as soon as it passes off the last of the data. This means that I had to use a stopwatch technique to measure the time. Not ideal, but let's see what happens. First, the statistics of what we are going to copy:

Total size of the transfer: 14,790,116,238 Bytes
Number of Files: 5,858
Number of Folders: 808

As you can probably see, a transfer of this size will overflow the RAM cache and should give a feel for what effect the file-creation overhead has on performance. So here are the results using the standard Windows Explorer copy routines and a stopwatch:

Normal: Time 6:45, Ave 36.52MB/s
Turbo: Time 5:30, Ave 44.81MB/s

Not nearly as impressive. But, first, let me point out that I don't know exactly when the data finished writing to the array; I only know when it finished the transfer to the network. Still, it is typical of what the user will see when doing a transfer. Second, I had no feel for how much overhead in Windows Explorer was contributing to the results. So I decided to try another test. I copied the ISO file (21,890,138,112 Bytes) that I had made with ImgBurn back to the Windows PC. Then I used Windows Explorer to copy it back to the server using both modes. (Remember, the time recorded was when the Windows Explorer copy popup disappeared.)
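As a sanity check, the reported average follows directly from the byte count and the stopwatch time (6:45 is 405 seconds, and 1MB = 1,000,000 bytes per the decimal convention chosen above):

```shell
# 14,790,116,238 bytes copied in 405 seconds, in decimal MB/s
awk 'BEGIN { printf "%.2f MB/s\n", 14790116238 / 405 / 1000000 }'
# prints: 36.52 MB/s
```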
Here are the results:

Normal: Time 6:37, Ave 55.14MB/s
Turbo: Time 5:17, Ave 69.05MB/s

After looking at these results, I decided to see how TeraCopy would perform in copying over the 'MyDocuments' folder. I turned off the 'verify-after-write' option in TeraCopy so I could measure just the copy time. (TeraCopy also provides a timer, which meant I didn't have to stopwatch the operation.)

Normal: Time 6:08, Ave 40.19MB/s
Turbo: Time 6:10, Ave 39.98MB/s

This test confirmed what I had always felt about TeraCopy: it has a considerable amount of overhead in its operation, and this apparently reduced its effective data transfer rate below the write speed of even the normal write of unRAID! When I look at all of the results, I can say that turbo write is faster than normal write in many cases. But the results are not always as dramatic as one might hope. There are many more factors determining what the actual transfer speed will be than just the raw disk writing speeds. There is software overhead on both ends of the transfer, and this overhead will impact the results.

During these tests, I discovered a number of other things. First, the power usage of a modern HD is about 3.5W when spun up with no R/W activity and about 4W with R/W activity. (It appears that moving the R/W head does require some power!) It has been suggested that one reason not to use turbo write is that it results in an increase in energy use; some have said that using a cache drive is justified over turbo write for that reason alone. If you look at it from an economic standpoint, though, how many hours of writing activity would it take for the energy savings to justify buying and installing a cache disk? I have the feeling that folks with a small number of data disks would be much better off with turbo write than installing a cache drive just to get higher write speeds.
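To put the energy argument in perspective, here is a back-of-envelope cost calculation. The drive count, daily duty cycle, and electricity rate below are assumptions chosen for illustration, not measurements from my servers; only the 3.5W figure comes from the tests above:

```shell
# Extra yearly cost of spinning up otherwise-idle drives for turbo write.
awk 'BEGIN {
    drives = 7      # extra drives turbo write spins up (assumed)
    watts  = 3.5    # power per spun-up, idle drive (measured above)
    hours  = 2      # hours of write activity per day (assumed)
    rate   = 0.12   # electricity cost in $/kWh (assumed)
    printf "Extra cost: $%.2f per year\n", drives * watts * hours * 365 * rate / 1000
}'
# prints: Extra cost: $2.15 per year
```

At a couple of dollars per year under these assumptions, it would take a very long time for the energy savings alone to pay for a dedicated cache disk.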
For those folks using VMs and Dockers which store their configuration data on a cache drive, they could now opt for a small (and cheaper) SSD rather than a larger one with space for the cache operation. Thus folks with (say) eight or fewer drives would probably be better served by using turbo write than a large spinning cache drive. (And if an SSD could handle their caching needs, the energy saved with a cache drive over using turbo write would be virtually insignificant.) When you get to large arrays with (say) more than fifteen drives, then a spinning cache drive begins to make a bit of sense from an energy consideration.

A second observation is that the speed gains with turbo write are not as great with transfers involving large numbers of files and folders. The overhead required on the server to create the required directories and allocate file space has a dramatic impact on performance! The largest gains come with very large files, and even then the impact can be diminished by installing large amounts of RAM, because of unRAID's usage of unused RAM to cache writes. I suspect many users might be better served by installing more RAM than by any other single action to achieve faster transfer speeds! RAM is cheap these days, and a lot of new boards will allow installation of what was once an unthinkable quantity of it. With 64GB of RAM on board and a Gb network, you can save an entire 50GB BluRay ISO to your server and never run out of RAM cache during the transfer. (Actually, 32GB of RAM might be enough to achieve this.) That would give you an apparent write speed above 110MB/s!

As I have attempted to point out, you have a number of options to increase write speeds to your unRAID server. The quickest and cheapest is to simply enable the turbo write option. Beyond that are memory upgrades and a cache drive. You have to evaluate each one and decide which way you want to go.
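The RAM write-caching described above is the standard Linux page cache at work: the kernel's vm.dirty_* tunables cap how much RAM may hold unwritten ("dirty") data before writers are throttled. A quick way to inspect them (the 20/10 percent values in the comment are common kernel defaults, not unRAID-specific settings):

```shell
# Percentage of RAM allowed to hold dirty (unwritten) data before
# writers are throttled, and before background writeback starts.
# Common defaults are 20 and 10 respectively.
cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio
```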
I have no doubt that I have missed something along the way and some of you will have some other thoughts and ideas. Some may even wish to do some additional testing to give a bit more insight into the various possible solutions. Let's hear from you…
  3. Because '65,xxx' is exactly 64K expressed as a decimal number --- 64 x 1024 = 65536 (K here is defined as 1024 --- 2^10). I think you probably need to run memtest, which you can find as an option during the unRAID boot process. Since you have 64GB of RAM, you should run it for 24 hours minimum. Did you get RAM recommended by the MB manufacturer? Are all of the memory sticks from the same manufacturer? I have heard that as you install more and more memory sticks on a MB, problems often surface with timing and other memory-related issues. You may have been bitten by one of them.
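The arithmetic, spelled out:

```shell
# K in this context is 2^10 = 1024, so 64K as a decimal number is:
echo $(( 64 * 1024 ))
# prints: 65536
```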
  4. "But this should work, indifferent what is set for startup!" Sure, but I'm just noting what works. By the way, on my servers, it takes the better part of a minute to start the array because all of the drives have to be spun up before the start-array procedure can be completed. The various steps are listed at the bottom left of the Array Operation page in a very small font until the process is completed and the array mounted. Are you seeing those steps?
  5. Did you click on the 'Start' button on the 'Array Operation' Page? (Under 'Settings', ' Disk Settings', the first option is to 'Enable Auto Start'. The default is 'No'. Change it to 'Yes'.) EDIT: The 'Enable Auto Start' option is used to automatically start the array when the server is started. If that option is set to 'No', the array must be manually started from the 'Main', 'Array Operation' tab.
  6. Go to 'Tools', 'Diagnostics' and post the file with your next post. A screenshot of the 'Main' (Array Devices) page could also be useful...
  7. Oh, double crap!!! I forgot about the issues that I had with the RealTek NIC! My trials and tribulations are detailed in this thread: http://lime-technology.com/forum/index.php?topic=39350.0 I actually did some testing for LimeTech (that I can't discuss) and the basic conclusion was that an Intel NIC MAY be required for MBs with a RealTek NIC and a low-power CPU. There are now Intel NICs in both of my servers (just updated my profile) and I haven't had any issues since sometime back in the April 2015 time frame. (In the interest of complete disclosure, I have not tried the RealTek NIC for many months. It may well be that the RealTek issues have been addressed by the people who maintain the Linux kernel.)
  8. Apparently, it 'worked' in most cases. However, if something goes wrong, the LimeTech people may/will be the only ones who can really help you out. Remember, it is a shell program that is doing the job, and it is making assumptions about how your system is configured. If your system fits those assumptions, all should go well. (The other updating instruction sets are intended to avoid problems that people were having when manually updating their systems.) In any case, MAKE A BACKUP OF YOUR CURRENT FLASH DRIVE!!!!! Those instructions are in both of the instruction sets that I provided links to. Then, if something goes terribly wrong, you can easily get back to your current setup and use the manual method to upgrade from ver 5 to ver 6. (In the manual method, you need some of the configuration files from your current version 5 setup, so that backup is very important!) I almost forgot one thing: you should have a minimum of 2GB of RAM for version 6. Four GB of RAM will run about any combination of plugins and Dockers.
  9. In your case, these are a few of the obvious ones:

Much improved GUI
Built-in APCUPSD support
E-mail notifications
Better support for plugins

In the future, you will find that user-based support in this forum will decline as more and more people convert to version 6.X. That alone should provide you with all the reason to convert. For information on making the conversion, see these two resources: http://lime-technology.com/forum/index.php?topic=39032.0 and http://lime-technology.com/wiki/index.php/Upgrading_to_UnRAID_v6 The first guide works well for people with simple setups (such as you), and the second one covers virtually every problem anyone has ever encountered in making the conversion.
  10. Why not set up a VM on the new server and pass the second NIC port on the MB to that VM and set up a PLEX server on the VM. You could still have a PLEX docker running which would be using the first NIC port. Thus, you would have two PLEX servers in the same box each with an assigned NIC.
  11. Well, the first thing you would need is a special Cat5E cable with the leads cross-connected so that the data-out pair(s) of one computer become the data-in pair(s) on the other. (This was sometimes done for a two-computer network back in ancient times when hubs were very expensive.) You also need to assign static IP addresses to these two ports. I also have the same question as trurl...
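On the Linux side, assigning the static addresses might look like the sketch below. The interface name (eth1) and the 10.0.0.0/24 subnet are assumptions; substitute the actual interface and an unused private subnet on your systems:

```shell
# Hypothetical static addressing for a direct PC-to-PC link.
# On the first machine:
ip addr add 10.0.0.1/24 dev eth1
ip link set eth1 up
# On the second machine, use 10.0.0.2/24 instead, then verify the
# link from there with:
#   ping 10.0.0.1
```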
  12. My experience has been that 1080P HD streams seem to present their own set of unusual problems when streaming; even 720P streams are much more forgiving. And the higher the data rate in those 1080P streams, the worse the problems are. (With the VLC player, I have issues streaming 1080P mp4 video and none with 720P mp4 video across the network.) You seem to have isolated the problem to your network. Now the problem is to try to figure out how to eliminate it. I would probably start by using the 100' Cat5E cable to feed the Netgear switch in the living room. Then replace that switch if the cable isn't the problem.
  13. Since you seem to have a lot of stuff laying around, I have a suggestion of one more thing to try. Check to see if you have an old switch laying around -- even a 10/100 one will do. (100Mbps is more than adequate for any BluRay material!) Connect only the server and the Dune to it. Then connect the uplink port of that switch to the router so that it will have DHCP service. Is the stuttering condition still there? Did its behavior change in any way? I should tell you that two of my three media boxes (Netgear 550) are connected to 10/100 switches at their respective media centers (too many audio/video devices require internet connections these days!), and there is a GB switch on that floor which distributes the signal to those two switches. On another floor is the main 16-port GB switch (where the servers are located), which is connected to a 10/100 router to provide the Internet connection and DHCP service. So there are up to three layers of switches involved in my network, and I don't have stuttering issues at this time. I have had switches and routers go flakey over the years and cause some unusual problems...
  14. Is it doing it only on material with BluRay (1080P HD) data rates, or does it also stutter on DVD material? (My problems were limited to BluRay ISOs, and high data rates made them much worse...)
  15. My suspicion is that, about the time I found the issue, some changes were made to the way the kernel was optimized for better VM and Docker performance that impacted the RealTek driver performance on systems with low-performance CPUs. This has never been really verified, as I was the only one to investigate it extensively. As you probably found, I 'cured' the problem with an Intel network card... By the way, you will probably find that you will now get Network Receive 'Drops:' errors on the Dashboard. (The current count on my Media server after seven days of uptime is over 59,000,000!) They do not cause any issue or problem that I can detect. These drops did NOT occur with the RealTek driver.
  16. You might want to read through this thread: https://lime-technology.com/forum/index.php?topic=39350.0
  17. What do you see under Workgroup settings? Maybe a screen shot. "Hello, I see nothing additional at all, just the standard options. UPDATE: Tried one more reinstall and it's working, albeit I see no icon in the header, only in the SMB settings." I think it only displays in the header if your server IS the local master. Haven't tried it lately since I have given that duty to my ASUS router. This is the case. My Media server is set up to always win the 'election' and it has the symbol displayed. Obviously, my Test Bed server is not the Local Master and it has no symbol!
  18. I am pleading a bit of ignorance at this point. Aren't these cores all on the same chip and cooled by the same CPU cooler? Is there really that much difference in the temperatures of the various cores that a small difference would affect the safety/operation of the system?
  19. List all of your hardware and, perhaps, someone will have a method by which you can get access to the unRAID console via an attached keyboard and monitor. I assume that you must have those two items, as you are using a couple of VMs. Another suggestion would be to have a friend with a laptop who would be willing to come over to your place for the evening. I believe some tablets can even be used to access the GUI.
  20. You need to get the Diagnostics file and attach it to your next post. You can do this from the "Tools" tab in the GUI; then click on "Diagnostics". It would also be helpful if you would list all of the plugins and Dockers that you have installed. I suspect that one of them is NOT shutting down and is preventing the array from stopping. If this is occurring, the LimeTech powerdown routine will hang and not be able to shut the server down.
  21. Google is your friend. A quick search found this site: http://www.binarytides.com/linux-command-check-memory-usage/ As I read it, you have quite a bit of overhead available.
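For a quick read without the article, the same information is in /proc/meminfo; MemAvailable is the kernel's estimate of how much memory new workloads can use without swapping:

```shell
# Total RAM and the kernel's estimate of memory free for new work.
grep -E '^(MemTotal|MemAvailable):' /proc/meminfo
```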
  22. One quick thing. Double-check that you do not have an ad blocker running or some other software/browser setting that prevents pop-ups. You may have to whitelist your unRAID server.
  23. Unfortunately, the model is not available in the USA, so the number of people who can respond is going to be limited to those areas of the world that run on 220-240VAC. (The BR700 is the smallest one available here.) Apparently the BR series has been around since about 2011, and I don't recall seeing anyone reporting incompatibility issues with them. (Remember that APC wrote the basic Linux software package that unRAID uses to control UPSs!) The biggest concern I see with your choice is the size: 330W seems a bit low for a server application. It might be large enough now, but if you decide to increase the number of HDs you could run into problems.
  24. Did you look under 'Settings', 'SMB', 'Workgroup Settings'? Are you running 6.1.7?
  25. Same here. It works properly now that I've enabled it again, though. I had a problem with the plugin not displaying the 'Monitor local master election:' and 'Current elected master:' settings lines in the 'Settings' > 'SMB' > 'Workgroup' tab. Uninstalling and reinstalling the plugin fixed that issue. But the icon is still not showing up in the Plugins page.