WeeboTech

Moderators · 9,457 posts

Everything posted by WeeboTech

  1. Some food for thought. FWIW, the mover writes its PID to /var/run/mover.pid. If the GUI were active, it could grep syslog for lines matching that PID, provided the entries were tagged properly. rsync's --progress option has useful information, but since it's designed for a terminal it uses \r instead of \n, so it would not pipe well into syslog via logger. In addition, --progress tells you how many files there are and where you are in that list, but that only works when a single rsync moves the whole batch. Currently the mover does a find and executes rsync many times, one rsync per file. Changing the mover to log to its own file and tailing that via the GUI might work, but it isn't necessary; the mover just needs to be tagged properly in the syslog with -t mover[pid]. That's also not all that difficult with a co-process in the mover, depending on the bash version. I think the real gotcha for reporting progress is that the mover would need to be changed to capture how many files will be affected up front, then provide some indication of where it is in that list, i.e. a /bin/find capturing the file list, counting it, and iterating through it. That's much more complex than the current bash script, which just uses find and rsync.
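The count-first-then-iterate idea above can be sketched in a few lines of shell. This is a minimal illustration with made-up paths, not the real mover script; the actual rsync-through-logger call is left as a comment.

```shell
#!/bin/sh
# Sketch: capture the file list once so the total is known up front,
# then report position while "moving" each file. Paths are stand-ins.
set -e
SRC=$(mktemp -d)                        # stand-in for /mnt/cache/share
for i in 1 2 3; do echo "data$i" > "$SRC/file$i"; done

LIST=$(mktemp)
find "$SRC" -type f > "$LIST"           # capture the file list once
TOTAL=$(wc -l < "$LIST")                # ...so we know the total

N=0
while IFS= read -r f; do
    N=$((N + 1))
    # A real mover would do something like:
    #   rsync -a "$f" /mnt/disk1/... 2>&1 | logger -t "mover[$$]"
    echo "mover[$$]: moving $N of $TOTAL: $f"
done < "$LIST"

rm -rf "$SRC" "$LIST"
```

The point is only that knowing `$TOTAL` before the loop starts is what makes a "file X of Y" progress line possible at all.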
  2. I'm with the others on this one. By the time you pay for a new motherboard/controller combo, it may be years before it pays for itself. Unless you want to reallocate or sell the Gigabyte and i5; that would be the only reason I could see for swapping out parts. The cost of cooling is not usually considered, but if the machine is on demand, that does not come into your equation as much. In addition, if you ever want to add hash checksumming to your files, the extra horsepower comes in handy.
  3. Lots of VMs and/or ramdisks. Although not for unRAID: in one particular case I'm working on, we are moving the webserver's cached pages to a ramdisk for high-speed access. While the SSD subsystem is very fast, we found we could shave off seconds when accessing the data via ramfs. Another use is putting SQLite on a ramdisk; in my own use cases there was significant improvement over magnetic drives. I could see where huge amounts of RAM would be useful for editing large audio and/or video projects in a ramdisk, then saving to final storage. I'm not sure how I personally would use 128GB of it, though.
  4. Samba has version-control modules, although I do not know much about them. Perhaps this could be used in line with the NAS accelerator-drive functionality. The accelerator suggestion was that files created under certain rules are placed in a specific location. Tom suggested at some point that files matching a rule could have a script or program run against them. Expanding this further, those rules and/or programs/scripts could do double duty: storing on an accelerator drive and/or rsyncing a copy to a backup area. So perhaps we need to lobby for the user share file system to support 'active configurable rules/expressions' that will run commands when matched.
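For reference, one of the Samba modules in this area is vfs_recycle, which keeps deleted files instead of discarding them (shadow_copy2 is the one for snapshot versions). A minimal, hypothetical smb.conf share sketch; the share name and path are examples:

```ini
; sketch only -- share name and path are made up
[media]
    path = /mnt/user/media
    vfs objects = recycle
    recycle:repository = .recycle/%U   ; per-user recycle directory
    recycle:keeptree = yes             ; preserve directory structure
    recycle:versions = yes             ; keep multiple copies of same name
```

That only covers accidental deletes over SMB, not the rule-triggered scripts discussed above, but it's the closest thing Samba ships today.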
  5. This topic has been moved to Good Deals!. [iurl]http://lime-technology.com/forum/index.php?topic=46368.0[/iurl]
  6. That drive looks fine. Issue is, if it starts to go south, it will happen fast. If that is your parity drive, then I would not worry as much unless you do not trust your data drives. I would suggest a smart long test (disable spin down timer) on your data drives for peace of mind. Also, make sure you are doing at least monthly parity checks.
  7. You could probably do this via USB and use rsync to synchronize the desired directories. Doing this to two drives on the same array is doable, but not recommended (unless you want to keep dated versions of these directories as backups). In the event of a catastrophe, you want to grab the smallest device with the most critical files. If you really wanted to go with a RAID 1 solution, an Areca controller could mirror a specified drive at the hardware level. Then there is the software BTRFS raid1 solution, which sort of mimics the drive-pool feature. It's my understanding that with BTRFS raid1, no matter how many drives are in a pool, at least two copies of each file are guaranteed. Still, with ~150GB of files/directories to back up, an external 2.5" laptop USB drive seems like a simple solution. If using rsync at the Linux level is too much, there's a handy Windows program called PureSync that can synchronize certain directories when a specific USB drive is plugged in.
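The dated-versions idea mentioned above can be sketched as follows. This is a minimal illustration using temp directories as stand-ins for the share and the USB drive; with rsync installed, the commented --link-dest form is the usual way to make unchanged files hard links into the previous snapshot instead of full copies.

```shell
#!/bin/sh
# Sketch: each backup run lands in its own timestamped directory.
# Paths are stand-ins for real mount points.
set -e
SRC=$(mktemp -d)                        # stand-in for /mnt/user/critical
DEST=$(mktemp -d)                       # stand-in for the USB drive mount
echo "important" > "$SRC/doc.txt"

STAMP=$(date +%Y%m%d-%H%M%S)
mkdir -p "$DEST/$STAMP"
cp -a "$SRC/." "$DEST/$STAMP/"          # plain snapshot copy

# rsync equivalent, de-duplicating against the previous snapshot:
#   rsync -a --link-dest="$DEST/last" "$SRC/" "$DEST/$STAMP/"
#   ln -sfn "$DEST/$STAMP" "$DEST/last"

ls "$DEST/$STAMP"
```

Grabbing any one dated directory gives you a complete copy of that point in time, which is the whole appeal over a single mirrored copy.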
  8. Quote: "Yes, I can probably do this. I'm not entirely sure what I should be looking for when reading the SMART report. Is there a sticky post or wiki somewhere that explains what to look for?"

There are plenty of hints around the board. What comes to mind: any pending sectors (which are a time bomb), significant reallocated sectors or an increasing count, any uncorrectable sectors, and a high load cycle count (from what I've read, over 600,000 is cause for concern). When my drive started to fail, there were pending sectors. I quickly replaced it, then did some tests: a SMART long scan and a badblocks scan. The pending sectors went away, so I put it back in service as parity. A short time later the pending sectors were growing again, and I took it out of service for an RMA. The SMART long scan will read every sector. Do a smartctl to capture the logs first, then a SMART long scan (turn off the spin-down timer first), then capture a smartctl log for comparison. There's some information here, but not enough: http://lime-technology.com/wiki/index.php/Understanding_SMART_Reports I would suggest scanning the forums for other conversations. If you are running unRAID 6, there should be some intelligence to warn you if there are issues.

Quote: "As I'm still only about 60% utilised, I can't quite justify the leap to 6TB, as much as I'd like to! I think it makes sense to just reallocate it as a parity and wait for prices to drop further, or for me to outgrow my current storage."

Considering your usage pattern, it doesn't make sense to get another drive yet. It does make sense, for your peace of mind, to replace the parity drive with the newly precleared drive.

Quote: "Am I right in thinking this is achieved by stopping the array, and going to Tools > New Config? The alert was a bit alarming - so just to confirm it won't format my drives, but simply trigger the rebuild of the parity? Any harm in removing the missing 250GB and swapping the parity in one step?"

Make note of where the drives are attached, and capture the screen if you can. Do the New Config. Assign all DATA drives first; do not assign parity yet. Start the array and make sure all drives are mounted. Once you have verified all data drives are where they should be, stop the array, assign the new parity drive, start the array, and let the parity sync occur. Many people make the mistake of accidentally assigning a data drive to the parity slot, thus losing that data drive. Therefore I suggest you check all data slots first before ever assigning a parity drive. I would not remove the 250GB drive and assign the data and parity drives in one sitting. I usually do it piecemeal: New Config, assign only the drives I want to keep, start the array, verify. Nothing should require formatting at this point in time, since nothing is new. When all is good, stop the array, add in parity, and start the array.
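The smartctl sequence described above looks roughly like the script below. It only prints the commands rather than executing them, since the real thing needs root and a physical drive; /dev/sdX is a placeholder for the actual device.

```shell
#!/bin/sh
# Dry-run sketch of the SMART check sequence. Replace /dev/sdX with the
# real device and remove the echo wrapper to actually run it (as root).
DRIVE=/dev/sdX
run() { echo "would run: $*"; }         # swap echo out for real execution

# Disable the spin-down timer for this drive in the unRAID GUI first,
# or the long test will be interrupted.
run smartctl -a "$DRIVE"                # capture a baseline report
run smartctl -t long "$DRIVE"           # long self-test reads every sector
# ...wait for the test to finish (smartctl -a shows remaining time)...
run smartctl -a "$DRIVE"                # capture again and compare the two:
# watch Reallocated_Sector_Ct, Current_Pending_Sector,
# Offline_Uncorrectable, and Load_Cycle_Count between the reports.
```

Comparing the before/after reports is the point: a stable non-zero count is less alarming than a count that grows across the test.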
  9. FWIW, and I realize no one is all that rich, the Dell E4300, E6400, E6410, and E6510 have an eSATA port and can be purchased used on eBay cheaply. They make good preclear workstation machines. Using one of the external bays, or whatever you have available with an eSATA port, you can use one as a standalone preclear workstation. For almost an equivalent amount you might be able to score a used N40L, throw an ICY DOCK DuoSwap on top, and use that as a backup for the most critical files along with a preclear station. This is what I did with my older N54Ls: http://www.icydock.com/goods.php?id=141
  10. If the 3TB has been working fine all this time, check the SMART values to determine the urgency of its replacement. I've had a number of the 3TB drives; one failed and showed it via SMART, the others are almost two years old without issue. You can turn off the spin-down timers and issue a SMART short and SMART long test on the suspect drive after replacing the 250GB or moving the data. I have three, so I would give it a 33% failure rate so far, pretty much like the Backblaze stats show: https://www.backblaze.com/blog/3tb-hard-drive-failure/ For peace of mind I would start shopping for a replacement after determining the urgency of replacing the ST3000DM001. If the data on the 250GB drive is not important or has been moved off to the other drives, I might elect to replace parity earlier, but probably would not. It depends on the age of the 3TB drive and whether reallocated, uncorrectable, or pending sectors are showing up (but that is me). I would probably order and preclear a drive for parity that I can live with for a while. For me, these days I go with the 6TB HGST drives; they get 225MB/s on the outer tracks. YMMV. If the goal is the cheapest price per GB available and you don't need the expanded space right now, then put the new drive in service as parity while shopping for an additional drive. As far as removing the missing 250GB drive and replacing the old parity with the new drive: I would move data off the 250GB drive with rsync to any spare space on the other drives, capture a printout or image of the current drive layout, then rebuild the array via New Config from scratch, using the new drive as parity.
  11. The CPUs can be upgraded; there are links all over the net about people upgrading them. If you have not needed it by now, you probably do not need it on a MicroServer. I run uTorrent under XP on 3GB of RAM, unRAID on 4GB of RAM, and Slackware development plus CentOS as a gatekeeper with development as other VMs, all under ESX on a Xeon 1220L with 16GB of RAM. I have the StarTech ASMedia eSATA/SATA cards in the PCIe slot. They work under ESX, yet if I could get confirmation about the SI-PEX40063 I would switch to those immediately. Two SSDs internally and two eSATA ports: color me interested!
  12. Interesting article. I think what's also missed in a comparison vs. unRAID is the drive monitoring we now have, the ease of plugins, events, and booting from flash to RAM. There are huge benefits to using unRAID.
  13. Excellent update, John_M. Anyone know if the SI-PEX40063 is usable/detected under ESX 5.x or 6.x?
  14. The N40L with TheBay's BIOS mod allows access to the eSATA port on the back. An external bay on top of the N40L, like the StarTech or something with a fan and eSATA, would suffice. It's also reported that the eSATA port supports port multipliers.
  15. Keep in mind that some people use rsync over ssh to remote servers, both locally and over the internet (and over a VPN).
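For illustration, the rsync-over-ssh form looks like the command below. The host and paths are made up, and the script only prints the command rather than running it, since it needs a reachable remote.

```shell
#!/bin/sh
# Sketch: rsync over ssh to a remote server. Host, user, and paths are
# hypothetical; -e selects the ssh transport (with any ssh options inline).
CMD='rsync -az -e "ssh -p 22" /mnt/user/backup/ user@remote.example:/backup/'
echo "$CMD"
```

The same syntax works across a LAN, the open internet, or through a VPN; only the reachability of the host changes.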
  16. Me too ... it would be really depressing to realistically add up everything I spent (but it was clearly a LOT). I too remember MANY very late nights writing code and meticulously entering them into the computer via toggle switches; paper tape; a teletype keyboard; and (finally) an actual keyboard during the decade between '75 and '85 or so. Want to see something more depressing? Tally up all the money you spent on dates or items for the wife, like all those useless knick knacks to decorate the house or the dozens of shoes and purses. You can't take it with you.. a coo and smile from your loved one is priceless !!!
  17. Me too ... it would be really depressing to realistically add up everything I spent (but it was clearly a LOT). I too remember MANY very late nights writing code and meticulously entering them into the computer via toggle switches; paper tape; a teletype keyboard; and (finally) an actual keyboard during the decade between '75 and '85 or so. Thank the heavens I skipped the toggle switches!!! Even hated keypunch programming!!!
  18. 4790K is no slouch either. Not sure I would sell it for a 6700k unless I were wishing to swap out the underlying infrastructure.
  19. Agree.

Quote: "You'll never miss the extra $$ once you've spent it, but you may very well wish you had more 'horsepower', more memory; more storage; or a higher-end graphics card as your usage evolves. As I noted earlier, the 6700k is a good choice if you're SURE you won't want to run multiple high-end VM's, but a Socket 2011v3 setup will have much higher PCIe bus bandwidth; significantly better memory bandwidth; and more physical cores [e.g. the 5820k has 6 cores supporting 12 threads with hyperthreading]. ... and while it's easy to upgrade a video card, or add more storage, it's not so simple to replace the basic infrastructure [CPU, motherboard, and memory] => so think carefully before making that choice."

This is a great choice if running multiple Windows instances, each with its own video card. Lots of cache, lots of lanes.
  20. Personally, I would probably choose the i7-6700K. Not only is hyperthreading available, but the cache is larger, and it has a top clock speed. Four full cores at 4GHz would be great for anything you are virtualizing. For virtualizing, I always choose a CPU with the largest cache and fastest clock speed that is affordable. The 5820K has more cores and more cache, but that may not come into play unless you plan on having 3-4 virtual machines running at the same time and require top speed for each.
  21. Quote: "This is incorrect. We thought we may have to require all disks spinning with dual parity, but that isn't the case."

What is the speed penalty with Dual Parity?
  22. My setup is a bit more complicated, as it's based on ssh. I use ssh forwarding/tunneling to a host that houses a Squid proxy; this can be a Raspberry Pi or unRAID itself. I use SecureCRT (which is commercial) to forward a local host port, tunneled over ssh, to the Squid proxy. When I need to, I log in with ssh, enable the proxy in the browser, and connect to the server's IP address. It helps to have multiple browsers: one you usually use and one that you use with forwarding (Chrome/Firefox, for example). It's a bit complex, but it's worked for me over the years when I can only gain a single ssh connection to my network. On a guest's network that is unencumbered by complex firewall rules, I would use OpenVPN.
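With plain OpenSSH instead of SecureCRT, the tunnel described above boils down to one -L forward. Hostnames are made up, and the script only prints the command since it needs a reachable host; 3128 is Squid's default port.

```shell
#!/bin/sh
# Sketch: forward a local port over ssh to a Squid proxy running on the
# remote end. host and user are hypothetical; -N means no remote shell.
LOCAL_PORT=3128
TUNNEL="ssh -N -L ${LOCAL_PORT}:localhost:3128 user@home.example.com"
echo "$TUNNEL"
# Then point the second browser's HTTP proxy at localhost:3128, and all
# of its traffic rides the single ssh connection.
```

This is why a second browser helps: only the one pointed at the local proxy port is tunneled, while your everyday browser stays direct.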
  23. You do not have to use the dated directory naming scheme; I only mentioned it as it provides a way to capture a file that is accidentally deleted. You would set up an rsyncd.conf file in /etc with the shares you want to export via rsync, then pull them with rsync -a rsync://tower1/sharename /mnt/sharename or rsync -a rsync://tower1/disk1/ /mnt/disk1 There are plenty of examples of setting up an rsync server on the forum. I would suggest searching the forum a bit and then jumping in with our assistance.
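A minimal rsyncd.conf sketch for the server side (tower1), to go with the pull commands above. The module name and path are examples; each [section] becomes a rsync://tower1/name URL once the daemon is started with rsync --daemon:

```ini
# /etc/rsyncd.conf on tower1 -- minimal sketch, names and paths are examples
uid = root
gid = root
use chroot = yes

[sharename]
    path = /mnt/user/sharename
    read only = yes
    comment = exported for pull backups
```

Marking the module read only means the backup machine can pull but nothing can push changes back, which is usually what you want for this scheme.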