WeeboTech

Everything posted by WeeboTech

  1. WeeboTech

    PC in a Desk

    At the time I was doing recording, it went to a 4-track, then an 8-track, then was mixed down to stereo directly into a sound card with Sound Forge. I haven't played or composed in a very long time.
  2. Many different methods to clear and/or certify a disk. I use unRAID itself on a refurbished laptop with badblocks. I do a multi-pass pattern test, and at the end of it I use the preclear script to add the signature. It's an old Dell E-series laptop which has an eSATA port, along with an ORICO Tool-Free USB 3.0 & eSATA to 2.5" & 3.5" SATA External Hard Disk Drive Lay-Flat Docking Station. It's a pretty compact temporary solution for getting the job done away from the server. Granted, it is an extra cost, but a refurbished Dell can be scored pretty cheaply on eBay: E6510, E4310, E4300, etc. These models have an eSATA port but do not support PMP. They are useful for diagnostics and/or clearing/certifying a disk. I'm sure a more modern refurbished laptop with USB 3.0 would suffice as well.
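As a sketch of the multi-pass step, this is roughly the badblocks invocation meant above. The device path and block size are placeholders, and it is printed as a dry run here because the real command destroys all data on the disk:

```shell
# Hypothetical device path -- replace with the disk under test.
DISK=/dev/sdX
# -w: destructive write-mode test cycling the patterns 0xaa, 0x55, 0xff, 0x00
# -s: show progress, -v: verbose, -b 4096: block size, -o: log bad sectors
CMD="badblocks -wsv -b 4096 -o badblocks-$(basename "$DISK").log $DISK"
echo "$CMD"   # dry run: print the command instead of wiping the disk
```

Run the printed command only against a disk you intend to wipe, then hand the drive to the preclear script for the signature.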
  3. Just had one start failing. The issue with these drives is that death is usually sudden and catastrophic. The parity check was good Aug 27th and the array was healthy; at 6:3am I started getting pending sector warnings. What is odd is that the numbers kept going up even though I wasn't using the array. As I try to rsync the data over to another drive, the numbers just keep climbing.
  4. I can't really answer that. It wasn't enough to trigger any alarms in my apartment; then again, it's always cool in my apartment. I am not using the LSI SAS 8212-4i4e in my Gen 8 MicroServer. I am using the internal 'cougar' controller without issue. For my larger Gen 8 server I am using an LSI SAS controller, however it is the 8-port internal model. I am using that in pass-through, however it is rarely used. Last I remember, unRAID was having issues with the USB controllers, and that was resolved in a later release. I have not used that particular server for unRAID in a while. Sorry I can't be much more help.
  5. My Gen 8's are using the internal controller without issue. I am using ESX 5.1 (HP) with the older mechanism of passing the disks through and unRAID 5(still) i.e. I am not passing the whole controller through with DirectPath I/O. I believe if you want to do that you will need to use the LSI. I tried, failed and moved on. The server's been up for 2 years without a reboot.
  6. I have a StarTech ASMedia ASM1061 card on a Supermicro X9SCM-F and it worked right away. I also have it in some HP MicroServers and it worked with unRAID as well.
  7. At one point I had 4GB and 8GB of RAM with cache_pressure=0 and I would run out of RAM. In my particular case I was rsyncing data to a dated backup using --link-dest=; I had to back off to cache_pressure=10. There's another kernel parameter to expand the dentry queue, but at that time, with that kernel, I could not find an advantage. What I did find to provide an advantage was an older kernel parameter, sysctl vm.highmem_is_dirtyable=1. That changed the caching/flushing behavior of the system; however, I'm not sure how that would interact with cache_pressure vs. cached inodes. It's not just about inodes themselves; from what I had read in the past, the dentry queue came into play too.
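For reference, the two sysctls mentioned above can be written down like this (the values are the ones that worked for me; this sketch only writes a temp file rather than applying them, since applying requires root):

```shell
# Sketch only: these lines would normally go in /etc/sysctl.conf or be
# applied with 'sysctl -w'. The full sysctl name for cache_pressure is
# vm.vfs_cache_pressure; vm.highmem_is_dirtyable applies to 32-bit kernels.
cat > /tmp/sysctl-cache.conf <<'EOF'
vm.vfs_cache_pressure=10
vm.highmem_is_dirtyable=1
EOF
cat /tmp/sysctl-cache.conf
```

Setting vfs_cache_pressure low (but not 0) keeps dentries/inodes cached while still letting the kernel reclaim them under memory pressure.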
  8. I cannot remember the outcome with regards to reliability. If I remember correctly, I retired it; there seemed to be weird issues with devices randomly dropping. I went with the 2-port StarTech card with the ASMedia chipset.
  9. I'm not a proponent of a full pause (only). While I'm not an opponent of that feature request, I'm more a proponent of a throttle option. The older md code had options to throttle to minimum and maximum resync speeds; there was also code to drop to the minimum speed while the subsystem was in use and raise to the maximum speed when it was idle. If someone were to set that to 0, or a very low but acceptable value, so be it, but a full pause might not be the best way to go. What if someone forgot it was on pause? Another choice might be to do incremental parity checks, but that entails keeping your place (like pause does).
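For illustration, the stock Linux md driver exposes exactly this throttle via two sysctls (values in KB/s); unRAID's md driver is custom, so treat this as a sketch of the mechanism rather than something unRAID necessarily supports:

```shell
# Read the stock md resync throttle if present: speed_limit_min applies
# while the array is otherwise busy, speed_limit_max while it is idle.
for f in /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max; do
    if [ -r "$f" ]; then cat "$f"; else echo "$f: not available here"; fi
done
```

On stock md you would raise or lower these with `sysctl -w dev.raid.speed_limit_min=...` to get the busy/idle behavior described above.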
  10. This post has already been reported 3 times. "Necroposting" is totally allowed here, the post is on topic, and the link within the post is the same as in this thread, so it is not spam. Possibly the poster is just a drive-by who only registered to make this one post; maybe he even has an axe to grind. But I am going to let it stand unless some other mod wants to bilge it. I'm also erring on the side of caution since it is hard drive/unRAID storage related.
  11. From what I gather via various reads, we can't enable trim on an array drive. If it's not enabled, the SSD will operate slower over time, yet that should make it safe for unRAID, as long as trim/discards are not enabled. This makes sense to me. I thought this was a good read as far as trim goes: http://unix.stackexchange.com/questions/218076/ssd-how-often-should-i-do-fstrim Trim triggers writes to the blocks directly without the OS knowing what's going on, thus actually erasing blocks and invalidating parity. I think the other part is how garbage collection works. Perhaps we are conflating the two when they should not be: while they work hand in hand to make the SSD efficient, garbage collection may be safe where trim definitely is not. According to what I recently read, it is 'discard' aware if mounted with discard.
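A quick way to check whether anything is mounted with the discard option, and would therefore be issuing TRIMs behind parity's back, is a sketch like this, grepping /proc/mounts:

```shell
# List any filesystems mounted with the 'discard' option; those would be
# issuing TRIM commands automatically, which is the unsafe case for parity.
grep -w discard /proc/mounts > /tmp/discard-mounts.txt || true
if [ -s /tmp/discard-mounts.txt ]; then
    cat /tmp/discard-mounts.txt
else
    echo "no filesystems mounted with discard"
fi
```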
  12. I think this is a very clever idea, but I would be loath to suggest the community take on such an undertaking without LT sponsorship. We need to be working hand in hand on this. This seems like a lot of work and may be easily prone to error. Perhaps: fill an SSD in the unRAID array, remove all of the files, stop the array, trigger a trim or wait for garbage collection to occur, then start the array and trigger a parity check. Given some data on another array drive, replace that drive with a replacement candidate, rebuild the replacement, then validate the new drive either by direct comparison or via hash-sum checking. Those are pretty much the steps that would occur if a drive required replacement.
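The final validation step can be sketched with ordinary tools (the paths here are throwaway stand-ins for the original and rebuilt drives):

```shell
# Build two stand-in trees that should be identical after a rebuild.
mkdir -p /tmp/orig_drive /tmp/rebuilt_drive
echo "same data" > /tmp/orig_drive/file.bin
echo "same data" > /tmp/rebuilt_drive/file.bin
# Hash both trees and compare; an empty diff means the rebuild matches.
(cd /tmp/orig_drive    && find . -type f -exec md5sum {} + | sort) > /tmp/orig.md5
(cd /tmp/rebuilt_drive && find . -type f -exec md5sum {} + | sort) > /tmp/rebuilt.md5
diff /tmp/orig.md5 /tmp/rebuilt.md5 && echo "rebuild verified"
```

If trim or garbage collection had silently invalidated parity, the rebuilt tree's hashes would not match and the diff would flag the damaged files.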
  13. Part of me thinks the whole internal block-level reassignment concern is partially FUD. Where there is real data, an internal block move, for whatever reason in the firmware, should still satisfy an external block request no matter where the firmware puts the data. Just think of how our high-level file systems would show corruption if blocks being moved and reassigned internally as cells decay were not handled transparently. The issue is what is returned from a block where the data has been deleted; if trim is not engaged, there shouldn't be an issue. My understanding is that the firmware doesn't really know the file system. It only knows blocks, and if a trim command occurs on a block, it is erased and added to a free block list. How does garbage collection come into play here? How does the firmware know a block is no longer in use? Does it? If a block is reassigned to a new spot, I would surmise the old block is tagged for garbage collection, yet a request for the original block gets data from the reassigned spot.
  14. Maybe someone with an SSD or a few SSDs in the array could test. Add a bunch of files to the SSD. Do a parity check. Remove some files, add some other files, and do a parity check. Wait a few days or weeks, then do another parity check. This, or possibly a more advanced suite of tests. I would surmise that if people are doing monthly parity checks, this situation would already have reared its ugly head.
  15. This topic has been moved to KVM Hypervisor. [iurl]http://lime-technology.com/forum/index.php?topic=45700.0[/iurl]
  16. I gave up on this. With the cheap price of Android tablets these days, it makes more sense to use WiFi, emhttp and a browser. My friend turned me on to a $40 Android tablet; that vs. the cost of an LCD device made me reconsider my time and effort to do this.
  17. Indeed. I suspect the third choice ("Auto") may eventually do what I had hoped it now already did -- i.e. use turbo write IF all of the disks were already spinning; normal write otherwise. My earlier post you referred to noted that this had been discussed -- I had hoped that v6.2 now had that implemented (but clearly it does not). But the presence of the third choice would certainly seem to imply that it may be coming :-) In my case, I want to manually control it via cron. It just makes sense for what I do all day long.
  18. I use turbo write via cron: turn it on when I am most likely to use it all day long, turn it off at night around bedtime. For some it may make sense to turn it on/off during the mover as well. Since I use my server all day long to move mp3's and re-tag them, turbo write helps reduce the wait time significantly. For those who may be interested, this is the cron table I install into /etc/cron.d/md_write_method:

     30 08 * * * [ -e /proc/mdcmd ] && echo 'set md_write_method 1' >> /proc/mdcmd
     30 23 * * * [ -e /proc/mdcmd ] && echo 'set md_write_method 0' >> /proc/mdcmd
     #
     # * * * * * <command to be executed>
     # | | | | |
     # | | | | +---- Day of the Week (range: 0-7, 0 and 7 = Sunday)
     # | | | +------ Month of the Year (range: 1-12)
     # | | +-------- Day of the Month (range: 1-31)
     # | +---------- Hour (range: 0-23)
     # +------------ Minute (range: 0-59)

     I find this useful when you are reading and writing to one drive most of the time. Once there are reads and writes to the other drives, things slow down. Therefore it all depends on how a site uses a server.
  19. This topic has been moved to KVM Hypervisor. [iurl]https://lime-technology.com/forum/index.php?topic=48327.0[/iurl]
  20. I think it's more about a very lightweight, unobtrusive OS. It's easier to install only what you need without a boatload of other dependencies.
  21. Since you already have potential corruption issues, you may want to use md5deep and create the external hash file (probably on a known-good drive). You can do both, but bunker or bitrot may stimulate the failures more, as they write metadata to the filesystem. If it were me, I would probably do both: md5deep first, then bunker. Once you do these, you can use rsync to copy the files and md5deep/md5sum or bunker to verify them. When doing the rsync, remember the -X argument, which copies the extended attributes. I would suggest you read through this thread, as there are lots of hints and commands that could help: correct commands to copy to XFS http://lime-technology.com/forum/index.php?topic=38507.msg357921#msg357921 I would also suggest setting up rsync as a server if going from one host to another. With the right entries in the rsync client and rsyncd.conf file you can get top speed (for unRAID) over the network; rsync over ssh is going to be slower. I've discussed this many times in many threads, but no particular thread comes to mind, so I would suggest a forum search for rsyncd.conf.
  22. Thank you Weebo.... Good ideas. I will check the RAM. I tried converting one of my many reiser drives to XFS, and this is where I ran into some corruption. It made me not want to switch to XFS and then use the Checksum plugin. I will let Memtest run overnight. Thanks again, H. Switch what you can to XFS after the other tests; it may be that only certain drives are at issue.
  23. 1. Double-check your RAM with memtest. 2. Create hash checksums for all the files using one of the tools. 3. If you are on unRAID 6 with reiserfs, you may want to consider migrating to XFS. I don't have any experience with the card in question. We've seen many of these corruption or questionable kernel issues caused by RAM or some lingering internal issue with reiserfs.
  24. Another idea might be to let the mover do its own thing with a co-process monitoring disk usage. When the mover ends, it can send a signal to the co-process or remove the co-process's pidfile, thus signaling that it should exit. Or perhaps (and again I'm thinking out loud so people get ideas) add another -exec line to the find that calls du -hs on the share. In that case the progress number would not be going up; it would be going down as files are moved off the cache share. You would at least know how much data is left. If you know your array writes generally run from 60MB/s down to 30MB/s, you can guesstimate the duration.
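The co-process idea above can be sketched like this (the share path, pidfile, and the "mover" itself are stand-ins; a real mover would be draining a cache share):

```shell
# Watch remaining usage on a stand-in cache share while a mover-like job
# runs; the watcher exits when the pidfile disappears.
SHARE=/tmp/cache_share
mkdir -p "$SHARE"; echo "queued file" > "$SHARE/file"
echo $$ > /tmp/mover.pid                     # mover announces itself
( while [ -e /tmp/mover.pid ]; do
      du -hs "$SHARE"                        # remaining data counts down
      sleep 1
  done ) &
MONITOR=$!
rm "$SHARE/file"                             # stand-in for moving the file off
rm /tmp/mover.pid                            # mover done: signal the co-process
wait "$MONITOR"
echo "mover finished"
```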
  25. Some food for thought. FWIW, the mover writes its pid to /var/run/mover.pid. If the GUI were active, it could grep lines from the syslog matching the pid if they were tagged properly. The --progress option has useful information, but since it's designed for a terminal, using \r instead of \n, it would not work well piped into syslog via logger. In addition, --progress tells you how many files there are and where you are in that list, but that only works when doing one rsync to move a bunch of files. Currently the mover does a find and executes rsync many times, 1 rsync per file. Changing the mover to log to its own log file and tailing that via the GUI might work, but it's not necessary; the mover just needs to be tagged properly in the syslog with -t mover[pid]. That's also not all that difficult with a co-process in the mover, depending on the bash version. I think the real gotcha for telling where you are in the list is that the mover would need to be changed to capture how many files will be affected, then provide some kind of indication of where it is in that list, i.e. a /bin/find capturing the file list, counting it, or iterating through it. It's much more complex than the current bash script that's in place using find and rsync.
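The capture-and-count change can be sketched like this (paths and file names are hypothetical; in a real mover each echo would instead be `logger -t "mover[$MOVER_PID]" ...` so the GUI could grep syslog by tag):

```shell
# Capture the file list up front so progress can be reported as "n of total".
MOVER_PID=$$
mkdir -p /tmp/cache; : > /tmp/cache/a.txt; : > /tmp/cache/b.txt
find /tmp/cache -type f | sort > /tmp/mover.list
TOTAL=$(wc -l < /tmp/mover.list)
n=0
while read -r f; do
    n=$((n+1))
    # real mover: logger -t "mover[$MOVER_PID]" "moving $n of $TOTAL: $f"
    echo "mover[$MOVER_PID]: moving $n of $TOTAL: $f"
done < /tmp/mover.list
```

Feeding the loop from the saved list (rather than piping find straight in) is what makes the total known before the first file moves.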