Everything posted by JimPhreak

  1. Well, the reason I'm asking is that the two drives I have attached to my onboard SATA ports don't show up in my unRAID VM, though I did confirm they are detected in the BIOS. Not that I run ESXi, but presumably you'd have to pass the mobo ports through as well. Yea, I figured that, but I can't do that since my ESXi boot drive is attached to the onboard controller.
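     For anyone else weighing the passthrough route, a quick way to see what would have to be passed through is to list the SATA controllers from a Linux shell (the grep pattern below is just an illustration). The onboard AHCI controller passes through as a single PCI device, ports and all, which is exactly why it can't be split from an ESXi boot drive hanging off the same controller:

         # List SATA/AHCI controllers to identify passthrough candidates.
         # The whole controller (every port on it) passes through as one unit.
         lspci -nn | grep -i -e sata -e ahci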
  2. Well, the reason I'm asking is that the two drives I have attached to my onboard SATA ports don't show up in my unRAID VM, though I did confirm they are detected in the BIOS.
  3. Is there any way to add disks to my unRAID array that are attached to my motherboard's (SuperMicro X10SDV-TLN4F) SATA ports, in addition to the disks attached to my PCI passed-through HBA (M1015)? I'm just wondering if I can get more than 8 disks to show up in my unRAID VM without buying a SAS expander.
  4. Is there any way to cancel preclears that have been started from the plugin?
  5. Did you install the faster script? No. I wasn't sure if I needed both, or if I just needed to replace the original with the faster one and rename it.
  6. I noticed in the plugin description it says "This plugin is a parser for Joe L.'s excellent Preclear Disk Script. It also includes a fast post-read verify option, courtesy of bjp999." I don't see it as a selectable option, so I assume all preclears done with this plugin will include the faster post-read verify option?
  7. Yes, the -n will result in just doing the clear, skipping the pre- and post-read cycles, which account for about 3/4 of the time of a preclear cycle. So the clear will be MUCH faster. Is there any way to stop a preclear after 1-2 cycles if it is scheduled to run 3 and still get a report on the first 1-2, or am I stuck waiting until all 3 cycles finish if I want a report?
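     One possible workaround, assuming -c controls the cycle count the way it does in the commands quoted elsewhere in these posts: run single cycles back to back, so each invocation finishes with its own report and you can stop whenever you're satisfied:

         # Hypothetical approach: one cycle per invocation instead of -c 3,
         # so a report is produced after every cycle.
         ./preclear_bjp.sh -r 65536 -w 65536 -b 2000 -A -c 1 /dev/sdX
         # Check the report, then rerun for another cycle if desired.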
  8. It really doesn't matter either way. The only thing you'd have to do twice with regard to upgrading to v6 and then doing the move is to make sure you assign the drives to their corresponding roles (parity, disk1, etc.), because you have to do that after a fresh install and you'll have to do it again once you switch motherboards. Other than that it makes no difference. Actually, unRAID should recognize your drives just fine after a motherboard change. As long as you keep the same flash with the same config/super.dat, you shouldn't need to assign anything. I've changed everything (motherboard, CPU, PSU, case) with no problems. So disk assignments are saved to the config and are not based on which ports on the motherboard/HBA they are plugged into? That's great to know. Thanks for the clarification. That will make it easy for me to move my server into a new chassis without having to worry about identifying which drive is which (i.e., parity) after the move.
  9. It really doesn't matter either way. The only thing you'd have to do twice with regard to upgrading to v6 and then doing the move is to make sure you assign the drives to their corresponding roles (parity, disk1, etc.) because you have to do that after you do a fresh install and you'll have to do it again once you switch motherboards. Other than that it makes no difference.
  10. In the past I've used the following command to clear multiple disks at once: ./preclear_bjp.sh -r 65536 -w 65536 -b 2000 -A -c 3 /dev/sdX This is what I'd run for brand new disks I want to check for integrity. So for disks I already trust, I can just run the following? ./preclear_bjp.sh -r 65536 -w 65536 -b 2000 -A -n /dev/sdX
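     For anyone puzzling over the flags, here's my understanding of the options (worth double-checking against the script's own usage output, since this is from memory):

         # Option breakdown as I understand it -- verify with the script's help:
         #   -r 65536   read block size in bytes
         #   -w 65536   write block size in bytes
         #   -b 2000    number of blocks to read/write at a time
         #   -A         align the partition for 4K-sector drives
         #   -c 3       number of preclear cycles to run
         #   -n         skip the pre-read/post-read phases (clear only)
         ./preclear_bjp.sh -r 65536 -w 65536 -b 2000 -A -c 3 /dev/sdX   # full test for new disks
         ./preclear_bjp.sh -r 65536 -w 65536 -b 2000 -A -n /dev/sdX     # quick clear for trusted disks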
  11. Well, the plan was not to add the 3TB drives back to the array until after I've copied all my data to the new config (with just the 4 x 8TB drives), so that if something went wrong with those 4 new drives, I'd still have all my data on the 8 x 3TB drives. So by the time I was adding the 3TB drives to the new config, I would indeed have parity protection. But now that I think about it, it's probably an unnecessary precaution and I might as well just add all 8 drives (4 new, 4 old) to the new array at once, after I preclear the 4 new 8TB drives (yes, testing these for integrity).
  12. Oh nice, so you're an unRAID newbie? You're going to love it once you get accustomed to using Dockers.
  13. Any time you add disks to a parity-protected array, you need to pre-clear them to avoid the array being offline while unRAID clears them. Note that this requirement would not apply if you do not yet have parity protection in place (e.g., on initial setup or after a New Config). Ahh, that makes sense now. Thanks for clearing that up for me.
  14. New drives come in today. So just to be clear, here's what I'm going to do to get this server fully operational with all 8 disks (4 x 8TB + 4 x 3TB) as fast as possible:
      - Replace the 4 x 3TB drives with 4 x 8TB drives
      - Preclear the 4 new drives (3 cycles with the -n option)
      - Do a New Config and assign ONLY the 4 new drives
      - Do a parity sync and then a parity check
      - Copy all data from the main unRAID to the newly built backup unRAID
      - Preclear the remaining 4 x 3TB drives and then re-add them to the array
      Is that correct? The only part I'm confused about is why I need to pre-clear the 4 x 3TB drives I've already been using. Can't I just format them to erase the data, since I already pre-cleared them when I first installed them to be sure they were good?
  15. So, can anyone comment on the idea of purchasing one of the SSDs linked in the OP to add to my current cache pool?
  16. This is probably only what you need. The issue is, the spin-down has to be temporarily disabled while this test is running. Until there's an API for that, it's kind of hard. I had started on some sort of dd of single random blocks, but stopped, as the real way to fix this is to turn off the spin-down timer temporarily. The short test is easy and finishes in minutes, but it's not really comprehensive enough. Perhaps the spin-down logic could inspect the SMART data and, if a test is being executed, skip the spin-down until the test is no longer active. Oh, so you're saying that currently, if you're running an extended test, the disk will still spin down if it's not otherwise being accessed? Last I remember, yes. A spin-down is issued and it aborts the SMART test. This may have been changed, but I may have missed it. As for short vs. extended: if we do a monthly parity check on the 27th and run an extended test on one drive per day, starting with disk1 through diskN where each day of the month is the disk number, this could be done nicely. At least that's how I planned to do it. That's two full sweeps of each disk a month. Another idea is to schedule a SMART extended test for all drives on the 27th and a parity check on the 28th. Sounds good, but currently there's no automated way to do this, correct? You just have to remember and then run it manually.
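     Until something like that exists, here's a rough sketch of how one could kick these tests off by hand, assuming smartmontools' smartctl is available (the device name and cron schedule below are hypothetical, and the spin-down caveat above still applies):

         # Rough sketch, not a built-in unRAID feature. Assumes smartctl from
         # smartmontools. NOTE: per the discussion above, a spin-down can abort
         # a running SMART self-test, so the disk must stay awake throughout.
         smartctl -t short /dev/sdb    # short self-test, finishes in minutes
         smartctl -t long /dev/sdb     # extended self-test, can take hours
         smartctl -a /dev/sdb          # view the results once the test completes

         # Hypothetical cron entries for the schedule floated above:
         # a short test weekly, an extended test on the 27th ahead of a
         # parity check on the 28th.
         # 0 3 * * 0   smartctl -t short /dev/sdb
         # 0 3 27 * *  smartctl -t long /dev/sdb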
  17. This is probably only what you need. The issue is, the spin-down has to be temporarily disabled while this test is running. Until there's an API for that, it's kind of hard. I had started on some sort of dd of single random blocks, but stopped, as the real way to fix this is to turn off the spin-down timer temporarily. The short test is easy and finishes in minutes, but it's not really comprehensive enough. Perhaps the spin-down logic could inspect the SMART data and, if a test is being executed, skip the spin-down until the test is no longer active. Oh, so you're saying that currently, if you're running an extended test, the disk will still spin down if it's not otherwise being accessed?
  18. Has anyone set up scheduled SMART tests for their array drives on any kind of regular basis, and if so, how are you doing it? I was thinking it would be nice to do a short SMART test once a week and an extended one once a month.
  19. Thank you for clearly explaining all of that; it really clears things up for me. So in my case OpenELEC is of no use, but it's nice to know what it can do.
  20. Did you read the guide here? As long as you're doing an upgrade and not a clean install, your data will remain. If you're moving your drives to a different system/motherboard, you'll just want to take a screenshot of the devices page before doing the move so that you can determine which drives (by serial #) are assigned to which roles (parity, disk 1, disk 2, etc.), as you'll have to re-assign them accordingly once you upgrade your unRAID USB drive to version 6. Also, since it appears you are running a version new enough to do an upgrade, you can just follow these shortened instructions here.
  21. Upgrading from version 5 to version 6 was very easy when I did it in the spring. Just review this thread, which should answer any of your questions. The hardware you're using will have no effect on the upgrade process. http://lime-technology.com/forum/index.php?topic=40952.0
  22. It's available on most devices, I believe. I use it within PlexWeb, on my Android phone, and on my gf's iPad. I know some of my users use it on Roku. For example, if I'm casting from my Android phone to my Chromecast, it lets me find specific spots in shows/movies as I drag the play bar along. It really is helpful in my opinion, and now that I'm used to it I'd really miss it if I turned the feature off.
  23. Yea, I'm going to give GPU passthrough a shot at some point. As for Plex, I've got 1,650 movies and over 13,000 TV episodes; I'm not sure whether that's considered large. I do also have "generate thumbnails" enabled on both of those libraries, which is what takes up most of the space.