WeeboTech

Everything posted by WeeboTech

  1. Check the raw SMART report. If any attribute says FAILING NOW, or this message shows up: "SMART overall-health self-assessment test result: FAILED! Drive failure expected in less than 24 hours. SAVE ALL DATA." then RMA the drive immediately. Frankly, the prior report already stated this, so I would prepare to replace it. If I had an onsite spare, I would be preclearing it now. There could also be an issue with the PSU, which may be why a spin-up error was posted. Do you have other internal drives, or just the 4 drives for the MicroServer?
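     For reference, a minimal example of the kind of check described above, assuming smartctl is available and /dev/sdb is the suspect drive (both assumptions):

        # Overall health verdict: prints PASSED, or the FAILED warning quoted above.
        smartctl -H /dev/sdb

        # Full attribute table; an attribute that has tripped its threshold
        # shows FAILING_NOW in the WHEN_FAILED column.
        smartctl -A /dev/sdb | grep -i failing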
  2. When this appears, it is cause for alarm: "SMART overall-health self-assessment test result: FAILED! Drive failure expected in less than 24 hours. SAVE ALL DATA." In my experience, SMART doesn't catch every situation, but when it warns you like this, it's going to be a near-death experience. Granted, the drive could last hours, days, or months in this state, but it could also go in the next few minutes. If the firmware's own diagnostics say something is wrong, you better believe it! LOL!
  3. Maybe bonienl has some input on this. I would have expected the front end to reveal a FAILING NOW attribute.
  4. What version of unRAID are you running? If there is a FAILING NOW attribute, that should surely send off a notification.
  5. There is an unRAID file browser. Maybe that could be altered a bit so that files in a specific directory present the option to edit them.
  6. Why does it matter if the old parity drive is overwritten? If you are using the new config function, parity will typically have to be rebuilt anyway, or at least rechecked, and if you are trying to recover from a red-balled drive, new config is absolutely the last thing you want to do. What scenario are you picturing where assigning a parity drive to a data slot will cause data loss? Maybe prevent this? And it's not the first or the last: "I've done something very silly today..... I had an old disk in my array that had failed, so I removed it and ran a new config to reduce the array by one drive. I then added the drives, but mixed up drive 1 and the parity. Once the system started I stopped the parity sync within seconds, but my data cannot be reached." "Maybe the wrong place to post this, but a complete, easy method of backup would be a great addition to the GUI. You can spend hours trying to implement CrashPlan or reading forum posts of unRAID users and still not get it to run, or after hours of work still not be able to back up your unRAID hard drives." While the scenario is common, the backup context is off topic in response to the question and would best be served as its own topic. There are various solutions, rsync to a secondary server being one. Let's keep this topic to front-end GUI subject matter. The backup topic is much larger and needs its own thread or feature request. I invite you to start one.
  7. Good ideas. I would think the first option fairly easy to do. As for the second option, there might need to be some kind of test functionality that does a read-only mount and, if mountable, presents 'Are you sure? Looks like a mountable filesystem exists.' This could be something like a mount/umount any time a drive is set to be parity. If the drive was mountable, present some red text. (Just tossing ideas out there.)
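     A minimal sketch of that read-only mount test as a shell script; the device, partition, and mount point are assumptions for illustration, not anything unRAID actually ships:

        #!/bin/bash
        # Probe whether the candidate parity drive already holds a filesystem.
        DEV=/dev/sdb1          # hypothetical first partition of the candidate drive
        MNT=/mnt/parityprobe   # hypothetical scratch mount point
        mkdir -p "$MNT"
        if mount -o ro "$DEV" "$MNT" 2>/dev/null; then
            umount "$MNT"
            echo "Are you sure? Looks like a mountable filesystem exists on $DEV."
        fi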
  8. At the top bar, after TOOLS, there is a graphical bar representing space utilization for the array. There should be additional bars representing space utilization for / and /var/log. If either of these fills up, the array will have issues, therefore it should be a front-end, highly visible display.

        root@unRAIDb:~# df -vH / /var/log
        Filesystem      Size  Used  Avail  Use%  Mounted on
        -                17G  567M    17G    4%  /
        tmpfs           135M  2.8M   132M    3%  /var/log

     I had suggested this in another thread, but I cannot locate it. Since this one is on WebGui enhancements that could/would enhance/affect all users, it seems like a good place to add it.
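     Until something like that lands in the GUI, a small cron-able check along these lines would do (the 90% threshold is an arbitrary assumption):

        #!/bin/bash
        # Warn when / or /var/log cross a usage threshold.
        THRESHOLD=90
        for fs in / /var/log; do
            used=$(df -P "$fs" | awk 'NR==2 {gsub(/%/,""); print $5}')
            [ "$used" -ge "$THRESHOLD" ] && echo "WARNING: $fs is ${used}% full"
        done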
  9. I had suggested model-serial-yyyymmdd-hhmm.txt. All of those are available, the name stays the same per model/serial, and the human-readable date/time is sortable. Epoch works for me, but then you have to strftime the value to convert it to a human-readable format. It's doable. I even have an strftime command and a bash loadable shared library, but it's not necessary since yyyymmdd-hhmm is sortable. Better yet, let this be configurable with meta characters so people can use whatever date format they want: @H = host, @M = model, @S = serial, and the rest of the % characters are passed to strftime. FWIW, '%s' translates to the epoch.
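     A minimal sketch of that template expansion in bash, substituting the custom meta characters first and then handing the remaining % codes to strftime via date(1); the model and serial values are hypothetical:

        #!/bin/bash
        HOST=$(hostname)
        MODEL="WDC_WD40EFRX"        # hypothetical
        SERIAL="WD-WCC4E1234567"    # hypothetical

        tpl='@H-@M-@S-%Y%m%d-%H%M.txt'
        tpl=${tpl//@H/$HOST}        # @H = host
        tpl=${tpl//@M/$MODEL}       # @M = model
        tpl=${tpl//@S/$SERIAL}      # @S = serial
        date +"$tpl"                # date expands the strftime % codes; '%s' would yield the epoch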
  10. I thought this was resolved by someone switching USB drives, ports, or to a USB 3.0 device. The exact fix escapes me.
  11. You're my hero!!! Thanks!!!
  12. Here is a link on Amazon since Buy.com is out: http://www.amazon.com/Lexar-High-Performance-microSDHC-Mobile-LSDMI16GBSBNA600R/dp/B00BQ5UWXI/ref=sr_1_5?ie=UTF8&qid=1435581490&sr=8-5&keywords=Lexar+High-Performance+MicroSDHC+633x FWIW, I bought one from Amazon and Tom said it was not unique. If anyone buys one, please post the outcome of your licensing. This model is perfectly sized and would make a good licensing mechanism.
  13. The speed of parity checks and rebuilds has more to do with the largest disk, and less to do with the overall capacity. A parity check or rebuild of a single disk in a 48T array of 6+1 8T drives is going to be slower than one in a 48T array of 12+1 4T drives. This is highly dependent on the architecture of the motherboard/controllers and drives. Given that, using the values obtained by SMART, chances are a rebuild and/or check will not be much faster than the SMART 'Extended self-test routine recommended polling time'. You can blame me, but I'm sure someone would have brought it up. Food for thought, though: if Tom thought this was easy to obtain, he might suggest it for the time being. One of the most requested features, so I would have to agree here, and here as well. This remains to be seen unless someone has actual experience with RAID5 and RAID6 numbers on Linux. In turbo write mode, writing to a single drive, the numbers will probably be very good. Chances are, if all drives have to be spinning, there could be a penalty. In my 4-drive array with turbo write, if I started to access the other drives in the array simultaneously, write and read speed would drop significantly. Therefore a cache drive would probably be more important. I do not agree; subdividing can be achieved today with VMs, at the cost of an additional license key. Agreed, and possibly at some kind of upgrade fee to make the development effort worthwhile and/or pay any royalties. While I think being able to subdivide arrays into smaller protected units is a great feature, the most requested feature has been multiple parity drives. SnapRAID has it, FlexRAID has it; unRAID needs it. What smaller arrays give you is the ability to confine arrays to a 5-in-3 or a controller, compartmentalize your data, and/or utilize turbo write for high-speed writes. I'm enamored with this methodology. In fact, so much so that instead of building one large server, I built multiple MicroServers. Instead of \\unraid\video, \\unraid\music, \\unraid\images, \\unraid\data, \\unraid\backups, I went with \\video, \\music, \\images, \\data, \\backups. When I take one down to upgrade the drives, I do not affect the others. Given all of that, I don't care how it's implemented at this point in time. If a method to reduce the number of spinning drives requires royalties, I'll certainly contribute to the cause, even if I don't plan to use it.
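     For reference, the polling time mentioned above can be read from smartctl's capability listing; a minimal example, where the device name and the 255-minute figure are illustrative only:

        root@unRAIDb:~# smartctl -c /dev/sdb | grep -A1 'Extended self-test'
        Extended self-test routine
        recommended polling time:        ( 255) minutes.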
  14. Wow, I forgot to add: the HP MicroServer can support PMP with the eSATA and the internal SATA. The motherboard has to be flashed with 'the bay's' BIOS, but from what I remember, it worked. You would need a SATA-to-eSATA low-profile cable adapter, ~$15 or so. However, I would probably just reuse the Sans Digital card, so you might be able to use the Sans Digital directly on the HP MicroServer with minimal extra parts. In addition, if need be, with the two PCI slots you could use two SIL3132 controllers and have decent performance. Set up an eBay follow for the HP MicroServer and see where that brings you. It'll be a lot fewer headaches and less 'can I?/what if?' guesswork.
  15. "My unit seems to consist of two backplanes, 1 per 4 drives, each having one SATA connected to it. These two cables lead to the 2 eSATA ports outside the case. What I simply lack the knowledge of is port multiplication. What I thought was that the backplanes were just, for lack of a better term, 'splitting' the SATA connection into 4. I thought that the card you connected it to had to know how to handle multiple drives, hence support for port multiplication. Am I mistaken? Do those backplanes actually handle the multiplication? Either way, do you think the SATA card I picked out in the original post would work?" eSATA operates at 3Gbps => 300MB/s. Drives operate at over 150MB/s, but putting 2 drives on should give acceptable performance. Use of more than 2 drives will result in unacceptable performance during any parity operation. A maximum of 2 of the 4 ports may be used. There are different usage cases where connecting 4 drives makes sense; unRAID is not one of them. It's not that cut and dried with the ASM1061, at least during my tests. It could be better in more recent kernels. In my tests, the driver for the ASM1061 was very blocky. When accessing more than one drive on the same cable, it would block the other drive while the buffer was being transferred, thus providing really poor performance. The Silicon Image chipsets provided the smoothest multiple-drive performance, perhaps due to smaller buffers. Every drive accessed simultaneously would divide the speed by the number of drives:

        1 drive  - 200MB/s
        2 drives - 90-100MB/s
        3 drives - 60MB/s
        4 drives - 30-40MB/s

     These were real tests using dd reads in parallel. You could see that all drives were accessed in sequence, but in a decent manner. The ASM1061 would drop 4-drive simultaneous access to 10MB/s. In my test case, you could see that only 1 drive was accessed at a time, blocking access to the other drives while the first was being read. As mentioned, round-robin access to the drives could provide decent performance given the right controller, hardware, and drive spread.
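     For anyone who wants to reproduce that kind of measurement, a minimal sketch of a parallel dd read test; the device names are assumptions, and the reads are harmless (read-only, discarded to /dev/null):

        #!/bin/bash
        # Read 4GB directly off each drive at once; dd reports MB/s per drive on exit.
        for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
            dd if="$dev" of=/dev/null bs=1M count=4096 iflag=direct &
        done
        wait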
  16. I did qualify my thought with, "Given the right circumstances in auction, you can 'possibly' score a whole host for the cost of that specialty motherboard." If you look at completed auctions, there were many that finished in the $100 to $200 range. Timing is everything: around major holidays, end of Oct to Dec, etc, etc. At some point in the last year or so, these were being blown out brand new at $199 (with memory). At that cost, it's a no-brainer in my book. The Gen 7 HP MicroServers accept 1 x1 and 1 x16 card and have space on top for a 5.25" bay (given the right Icy Dock bay you can have a 3.5" and a 2.5" drive up there comfortably). The Gen 8 HP MicroServers accept 1 x16 card, and you can retrofit a 2.5" drive inside. I'm going to further qualify my thought with: if you do the Celeron micro board and mini PCI, it's more effort than it's worth and it's going to perform poorly. At least with the MicroServer, it will have a low level of performance, but above poor, and given the right configuration it could do pretty nicely. It's all based on how the drives are configured. If you configure them sequentially on the PMP, performance will suffer (in any configuration). In addition, if parity is on the cable, it will be worse. With the MicroServer, if you configure 1 internal drive, 1 PMP drive on cable 1, 1 PMP drive on cable 2, round-robin like that, you can get 12 data drives performing at close to regular SATA speeds during a parity check or generation. Put the smallest drives on the PMP, and possibly configure the top bay with the fastest SATA drive for parity, and the potential for decent performance is there. My experience with PMP is that when multiple drives are read/written on the same cable, each drive accessed divides the speed. For RAID-5, it's not so bad, as there are small reads to each drive in sequence. With unRAID, it could hamper things with parity on any of those cables. The performance with PMP and parity on the same cable is going to be very poor. The day you have a drive failure and try to rebuild, you will be wishing you did not do it this way. What you save in money will be spent in time. If you can somehow get the parity on its own SATA channel, you have a chance of this working at a fair level. If you can somehow route the SATA port to an eSATA port and go that route, it's feasible. Then there is the cost of a powered eSATA bay. This could be useful as a backup server mirroring a primary server drive for drive. In that case, forgoing parity and using it as a mirror server for backups would be a decent usage case. Frankly, I'm not sure the nano-ITX board is worth it.
  17. The Startech ASM1061 from Newegg worked for me. A Silicon Image provided a smoother response. With port multipliers, the bandwidth is divided among the drives; there comes a diminishing return. For reading one drive at a time, they are great for providing simpler cabling. It's the parity checks that will slow it all down. If the goal is to do it as a project because you can, then so be it. If it's to provide a backup server with some sort of decent usage, the used HP MicroServer and eSATA PMP card is the way to go. By the time you buy the mobo and spend the time retrofitting, it could be done at nearly the same cost with a greater benefit. With the additional cost of a trayless Icy Dock on top, you can have a 2.5" SSD and a 3.5" removable. Great for preclearing a drive or using a drive like a floppy for backups. The li'l HP MicroServers make great utility machines.
  18. I think it's not possible. Apparently, the port multiplier is embedded into the SATA backplane. I have the smaller one of these units. You have to really open it up to be sure. In my smaller unit there is the SATA backplane, then there is a daughterboard that has the eSATA PMP-to-SATA breakouts to the backplane.
  19. I agree with Gary (what am I saying?). If you don't need to spin up the entire array, that is a plus, but even if you do, the feature would still be very valuable. A few baby steps toward approaching NetApp and IBM might be in order before closing the door on those options. On the idea of separately protected smaller arrays within a single box, I am lukewarm. There were three primary use cases: 1 - a two-drive failure; 2 - a second drive fails while rebuilding a failed disk; and 3 - an unexplained parity error where we want to triangulate the failed disk. Smaller arrays do not address any of these, although they would reduce the likelihood of the 1st and 2nd, I suppose. The advantage, I guess, is you could use two different-sized parity drives - a small one protecting smaller disks, and a larger one protecting larger disks. But then I think about upsizing a disk - now you need to move it to the other array - it just becomes unnecessarily complicated IMHO. I agree with Gary as well. For arrays with a protected cache drive, it's a small price to pay to spin up all drives. I presented the idea of smaller arrays as food for thought and to garner opinions on the pros/cons of the options. I am just as interested as the other parties in the royalties for the other options. If the royalty/upgrade fee isn't cost prohibitive, this sets unRAID apart from many other solutions.
  20. The original MD driver did this. The RAID-5 driver did this as well. Not really; if disks 1,2,3,4 are protected by parity #1 and disks 5,6,7,8 are protected by parity #2, then mounting disks 1-n the same way as it is done today would provide the user share spanning. FWIW, when doing high-speed loads with turbo write, it's almost as fast as a single drive. So for some usage cases, this could prove to be a big benefit for certain sub-arrays.
  21. At the risk of going slightly off topic, I believe a big reason to implement RAID-6 is extra redundancy for large, wide arrays. Is there a possible choice of multiple smaller array pools that are single-drive parity protected? Regarding option 3 with all drives spinning: although we may lose the advantage of spin-down for unused file systems, I'm sure people with a huge number of drives may still find the extra redundancy useful, i.e. a business archiving implementation. In my case, I would not use it. I would rather have smaller, manageable, protected arrays that are consolidated in visibility, either on one server with smaller array pools or on multiple servers.
  22. I would just as soon pick a higher-quality ITX board that supports a PCIe card with eSATA. Build it in the Lian Li Q25B or something similar. You don't want your parity drive on a port multiplier. With a port multiplier, your speed is divided by the number of drives on the cable. A 3Gb/s link (~300MB/s) divided by 5 yields 60MB/s at best. Frankly, I've never received those speeds; more like 20-30MB/s during high activity. You would most likely need to do some sort of round robin so that only 1 drive is accessed at a time on each eSATA cable during a parity check. You can also see about retrofitting the case with SAS/mini-SAS connectors. In that case, you can get full speed out of the drives with an LSI SAS card that has external SAS connectors. That will probably be easier than retrofitting a case for a motherboard. Using an ITX case as a host with an LSI external SAS card and cables, you can have at least 15 drives at full speed. I probably would avoid the specialty board in favor of an external host. You can also go the cheap route and score an HP MicroServer used somewhere. Add in the LSI SAS card (or go PMP with the ASM1061 card). Given the right circumstances in an auction, you can 'possibly' score a whole host for the cost of that specialty motherboard, plus the cost of a PCIe controller depending on which way you go. Put parity in the host on the fastest SATA channel. FWIW, I tested the Startech board that has the ASM1061 chipset. It worked for PMP, but not as well as the Silicon Image controller.
  23. I just had success with the same reader (I bought it a couple of years ago). I was able to remove a USB thumb drive and activate this via unRAID's new GUID process. This also allowed ESXi 6 to use the media without any reset messages. Thanks again, Dan! Thanks for the heads up. Perhaps I'll jump aboard and get one of these as well, although I've been going with the 64GB SanDisk Ultra Fits these days, mostly due to size. Since I can replace one online without having to wait for someone, this eases my concerns a bit. I still think that dual keys, one for licensing and one for booting with configuration content, is a good solution. The response received to this idea was that it would be a support burden.
  24. Slightly off topic, but food for thought in this case: I've seen USB fans. If the motherboard can supply USB power while in a powered-down state, then a USB fan in the case could help exhaust heat while idle. A fringe case, but still viable. I remember years ago there used to be PCI slot fans that would keep spinning for a certain amount of time on power supply standby power.