
MortenSchmidt

Everything posted by MortenSchmidt

  1. Those writes were to both data drives and parity, meaning the parity does not become invalidated. Each time a bit is flipped on a data drive, the corresponding bit is flipped on parity; this would have been the same if disk1 had been valid. From his description, this is exactly what happened. But nowhere in this process do we explain how his current/intended parity drive got invalidated. Let's try and help him first; we can bash him for not keeping a backup later.
  2. Well, at least I had a theory to offer: unraid has noted down disk1 as being unformatted in the config file, and therefore does not believe there is any data that can be rebuilt onto it, and therefore decides not to simulate it. I could be wrong, but if there is a backup of the good config file available, I would try restoring that. It would be the same configuration of disks as he has now, so it should not hurt.
  3. But guys, how exactly would his parity become bad? He claims he didn't rebuild parity in the current configuration, and while in the invalid configuration, unraid would not write to the parity drive either...
  4. Also, agree with BJP about cloning disk1 with dd. If you have a spare, use that; if not, go buy one. It is good practice to keep a spare disk on hand, ready to go in case a drive dies. You want to always minimize the amount of time your array runs unprotected.
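For anyone unfamiliar with it, here is a minimal sketch of the dd clone idea, demonstrated on throwaway image files so it is safe to try. For a real rescue you would point if= and of= at whole devices (e.g. /dev/sdb and /dev/sdc, hypothetical names here; triple-check yours with fdisk -l first):

```shell
# Stand-in for the old disk; a real clone reads a whole device instead.
printf 'important data' > /tmp/source.img
# conv=noerror,sync keeps reading past bad sectors and zero-pads them,
# which is what you want when copying a dying drive.
dd if=/tmp/source.img of=/tmp/clone.img bs=64K conv=noerror,sync status=none
# The clone's leading bytes match the source (sync padded the tail to 64K).
head -c 14 /tmp/clone.img    # -> "important data"
```

status=none/status=progress are GNU dd options; on a very old or busybox dd just drop that argument.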
  5. What bothers me is we don't understand why his parity would be bad. Note that what I suggested was not to rebuild his disk1, but to unassign it and check whether the array will properly simulate it before doing so. It is still unclear (to me) whether unraid is actually simulating a disk1 from bad parity, and it is unclear from the steps described how parity could be bad. Is it at all possible unraid noticed disk1 is unformatted and recorded this in the present disk config file, thus not bothering to simulate disk1 at all? It was on the off chance of that that I recommended restoring the disk1 config WITH DISK1 PHYSICALLY DISCONNECTED. That config file should contain the same disk setup as he currently has.
  6. Yeah, I just looked at the screenshot again and see disk1 shows up as both "not installed" and "unformatted". However, nothing in his description of events explains why the simulated disk1 would be corrupted/unformatted. At first I thought disk1 was not being simulated; are we 100% sure it is in fact being simulated here? It may very well be that BJP is correct and restoring the disk config will not help, but on the other hand it is hard to see how it would do any harm (assuming you physically unplug disk1 so the array doesn't start automatically when you boot).
  7. OK, thanks for fully describing the steps that led to this. Hmmm. You may want to let BJP confirm this is the right action, but I think you should stop your array and shut down. Then unplug disk1 physically in the box. If you are not 100% sure which drive it is, note the serial numbers (they are in your screenshots here) and go by that. Then take your USB drive and put on a fresh copy of the disk config files from your old non-working USB key. Hopefully you still have a copy on your computer. Then fire it up again and see if you get a simulated disk1. If you do (and I think you will), you're golden. Shut down, reinstall disk1, and reassign it to the array; it will then rebuild disk1.
  8. Try to confirm via the console: ls /mnt, then ls /mnt/disk1.
  9. You can't. That was a step or two back. Are you sure the files are gone? What is at /mnt/disk1?
  10. I think you have left out a step in your description. Normally, parity would only be valid if you rebuilt your parity or forced it valid. Maybe you switched config files around and that is why it thinks parity is valid. One step you can take without risk is to run a NON-CORRECTING parity check. If you get lots of errors in the first minute, then you probably have good parity and can rebuild disk1 as I described. But hold off on that step till you hear from bjp.
  11. You do have a mess on your hands. Looks like you have done something to make parity valid after screwing up your disk1. Hopefully you did not run a parity check (= parity rebuild)? If you did, that rebuilt your parity to reflect the corrupted state of disk1 and effectively destroyed the good parity that you would have been able to use to rebuild disk1. You would then have to look at recovering files from disk1 itself. Sorry, no clue how to do that. I bet you wish you had stopped and asked for help as soon as you realized you had corrupted disk1.
Hopefully, parity is only showing as valid because you forced it with console commands. Sorry, I'm not going to read your 3-page thread. If that is the case, you should be able to simply stop the array, unassign disk1, start the array (which will then invalidate the data on the physical disk1, since you may be making changes to the virtual disk1 that is now available via parity), then stop the array, reassign the now-invalid disk1, and start the array. This should start rebuilding your disk1 from good data, and you will rejoice with much happiness.
  12. OK, I googled it, and it does seem you guys are right - the x64 kernel does use more memory. Estimates I have seen range from 30% to 60% more. This is the best explanation I have seen: http://askubuntu.com/questions/7034/what-are-the-differences-between-32-bit-and-64-bit-and-which-should-i-choose
But hey, with unraid we are running rather large RAM-drives, so yes, the OS and applications will take more, but it will partly be offset by reducing the unraid RAM-drive from (in my case) ~550MB to something more reasonable. I still think it's feasible. If it's very tight, one could always use a 32-bit VM, or if more than one VM is required, some of them may be perfectly fine with 32-bit.
The base requirements for console-only linux are really quite modest. Ubuntu CLI requires 192MB RAM (they mention 64-bit, but don't make a distinction between 32 vs 64-bit for min. RAM): https://help.ubuntu.com/community/Installation/SystemRequirements Slackware requires 64MB. No, not GB - MB (no mention of 64-bit at all, so assume that is for 32-bit only): http://www.slackware.com/install/sysreq.php
Apps like Subsonic, Sickbeard, CouchPotato, Transmission and NZBGet take very little RAM, and adding an extra 60% on top would be a non-issue. SABnzbd would be a problem, but I don't see any reason to go back; NZBGet is really good these days. Lastly, if one were to assign a VM with RAM on the tight side of things plus a swap file, that swapfile would be cached by the host OS, so it wouldn't slow down the VM to the same degree as a physical OS using swap.
At the risk of repeating myself, it is not primarily the monetary expense of adding RAM that concerns me, it's the need to validate stability, which will take time and cause downtime. Going from 2 to 4 modules is not without risk, for example.
Also, I don't own a monitor for my unraid box, and it has become too heavy to carry upstairs ;-) The fact that DDR2 RAM is 50-70% more expensive than DDR3, and that I wouldn't be able to reuse DDR2 in a future DDR3 system, is a minor concern as well. Sometimes less is more.
  13. I currently have a 14-drive (+ parity and cache) system ranging from 1TB to 4TB, and in 2 or 3 years have had 3 x 2TB WD Greens and one 4TB Seagate fail. I'm guessing that is pretty average. Plus the countless pre-unraid drive failures. All hard drives WILL fail; it is only a matter of time. I keep a spare pre-cleared 4TB on hand so I can rebuild immediately when a drive fails, without running off to the store and overpaying for a new one. Small price to pay.
  14. Question for you: have you ever had a hard drive fail on you? And are you sure you don't want parity? Unraid is very lean with no apps. See minimum requirements: http://lime-technology.com/wiki/index.php/FAQ#Minimum_System_Requirements.3F Speed will depend on your drives, not your CPU. Single-core and 1GB will do fine if you don't run any addons, regardless of whether you use a parity drive or not (and it would be stupid not to...)
  15. I am very well aware, thank you, but this does not answer my question. Why would it not be realistic to run a 1GB linux VM (which would on a 64-bit OS all reside in the host's 'low' memory), and leave the remaining 1GB for unraid's use (actual memory use, RAM-drive and buffers)?
"64-bit systems do not have 'low' or 'high' memory. Memory is treated as a single block. The question makes no sense; the subject of 'low' memory is moot."
Fine, then the question is: why would it not be realistic to run a 2GB system with a 1GB linux VM, and leave the remaining 1GB for unraid's use (actual memory use, RAM-drive and buffers)? I'm just trying to understand why everyone feels it's essential to have 4, 8 or even 16GB of RAM. I'm predisposed to the opinion that it is not a necessity, of course completely without having tried it... so I will listen to reasons that make sense. I didn't bring up the low vs. high thing - that indeed makes no sense.
  16. I am very well aware, thank you, but this does not answer my question. Why would it not be realistic to run a 1GB linux VM (which would on a 64-bit OS all reside in the host's 'low' memory), and leave the remaining 1GB for unraid's use (actual memory use, ram-drive and buffers)?
  17. Out of curiosity, why the demand for big RAM? Unraid with no addons runs pretty smoothly with 1GB as I recall, and with linux being so efficient and the VM's filesystem no longer needing to be on a RAM-drive, one would think 1GB for a VM would be sufficient as well. I'm guessing the VM doesn't really need a great amount of free memory for buffers and cache, since the unraid host will also be providing buffering. Perhaps 512MB for a no-desktop linux VM would be sufficient too. I know people can upgrade RAM and the monetary expense is not all that great, but that would entail risk and downtime while testing stability, and eat away time/money from people's new-mobo overhaul savings account (and in my case, my old system is DDR2, so the RAM would not even be reusable in that E3 Xeon system I'd like some day). So I'm just wondering where the 1 + 1GB RAM allocation theory would really fall to pieces. Remember, when you look at free memory on your 4GB+ system, linux is designed to put all of your memory to use in the form of buffers + cache, so there will never be much free memory regardless of how much you put in.
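That last point about buffers + cache is easy to verify on any Linux box; the fields below are standard /proc/meminfo keys, and most of what looks "used" is really reclaimable cache:

```shell
# Print total, free, and reclaimable buffer/cache memory (values in kB).
# The anchored regex keeps SwapCached and similar lines out.
awk '/^(MemTotal|MemFree|Buffers|Cached):/ {print $1, $2, $3}' /proc/meminfo
```

The Buffers and Cached numbers are memory the kernel hands back to applications on demand, so low MemFree by itself says nothing about being short on RAM.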
  18. As the topic says, I see 3 rsync commands running, each invoked at the same time and moving the same file. It is moving files SLOOOOWLY. After one file completes, there will be 3 rsync processes working on the next file. Any idea what is up with that? Running 5.0.5. Further detail: when issuing "ps -ef | grep rsync", I only saw one find process but 3 rsync processes. So without knowing a lot about how the mover works, it looks like the actual mover script is only running one instance, but for some weird reason is issuing 3 rsync commands for each file encountered by the find process.
  19. In the case I brought up, where unraid botched up while rebuilding a disk, there was a far better action to take. Reboot with a clean go script (no add-ons), and rebuild the disk again. Had I run a correcting parity check, I would not have had that option.
  20. Make sure you don't confuse unraid share settings with actual filesystem permissions. Both need to allow access. Sounds like you have problems with the latter? Google filesystem permissions and make sure you understand how they work. Also, there is a permissions mask setting in sab that will determine the permissions for the folders sab creates.
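To illustrate the two-layer point above: the share settings live in unraid's GUI, but the on-disk permissions are ordinary Linux modes you can inspect and set from the console (the path below is just an example):

```shell
# Create a test folder and give owner, group, and others full access.
mkdir -p /tmp/demo_share
chmod 0777 /tmp/demo_share
# Show the octal mode and owner:group (GNU coreutils stat).
stat -c '%a %U:%G' /tmp/demo_share
```

The permissions mask in sab's settings uses the same octal notation (e.g. 777), applied to the folders sab creates.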
  21. Yeah, the theory is nice and all, but just a couple of releases back (4.6 and 4.7) there was a substantial bug in unraid that would cause a drive being rebuilt to have errors in the very first part of it (the superblock, I believe). This occurred for me several times; it is provoked by having addons running and accessing disks (changing the superblock) while the rebuild process starts. See my old topic on this: http://lime-technology.com/forum/index.php?topic=12884.msg122870#msg122870 Now, if you had that happen, then the next time you run a correcting parity check, those errors will become permanent corruptions to the drive you had rebuilt. I am very grateful to Joe for advising all of us to run NON-CORRECTING monthly parity checks; thanks to this my unraid server maintains a perfect record of never losing or corrupting any data (I was able to successfully re-rebuild the disk in question by doing it without my addons running). Sure, the bug was eventually (after far, far, FAR too freaking long) corrected in unraid 5, but I say better safe than sorry. Non-correcting monthly parity checks are safest, and I would STILL like to see an option to automatically perform a non-correcting parity check after upgrading / rebuilding a disk.
  22. The other day, I was sorting some media and somehow ended up with 5 duplicate files. This resulted in the logger crashing and restarting less than a day after I had 'organized' those files. Looks to be because cache_dirs keeps scanning the duplicated files, and each one is reported to syslog each time cache_dirs runs. Would have liked to attach a syslog, but... :-) Is it likely the cause is as I have described, and is there any way to avoid this (other than avoiding creating duplicate files)? If not, can we talk about an extension to unraid_notify that would send an email when syslog gets over a certain size (or the RAM filesystem is short on space)?
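Until something like that exists, here is a rough sketch of the kind of check such an extension could run periodically. The 10MB threshold, the log path, and the plain echo standing in for the actual mail/notify call are all assumptions, not unraid's real mechanism:

```shell
#!/bin/sh
# Hypothetical watchdog: warn when syslog grows past a size limit.
LIMIT_KB=10240                      # 10MB threshold (made-up number)
SYSLOG=${SYSLOG:-/var/log/syslog}   # override via environment if needed
size_kb=$(du -k "$SYSLOG" 2>/dev/null | cut -f1)
if [ "${size_kb:-0}" -gt "$LIMIT_KB" ]; then
    # A real version would invoke a mailer / notify script here.
    echo "WARNING: $SYSLOG is ${size_kb}KB (limit ${LIMIT_KB}KB)"
fi
```

Dropped into cron, this would fire once the RAM filesystem holding the log starts filling up, instead of the logger silently crashing.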
  23. "It hangs because unmenu is waiting for cache_dirs to return some output... which it will not because of the way it runs in the background. The post below yours, or above mine as the case may be, will work to start cache_dirs." I don't think that's accurate. When cache_dirs starts, it does output a string to the console. Looks like this on mine:
cache_dirs process ID 5317 started, To terminate it, type: cache_dirs -q
Further, I've now tried to invoke a script of mine that starts cache_dirs with my favorite arguments (that way it will always start with the same arguments, both when called from the go script and from unmenu). I've added an echo command - the script looks like this:
cache_dirs_args='-w -s -d 5 -e "Backup" -e "Games" -e "MP3BACKUP"'
/boot/custom/cache_dirs/cache_dirs $cache_dirs_args
echo "cache_dirs started in background with arguments" $cache_dirs_args
Still the same problem - unmenu hangs when I invoke my script (and I'm positive it does output text). I can use the AT workaround (thanks sacretagent), just curious as to why this happens with the way I had done it. Trying to learn here.
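On the hang itself: if the caller blocks because the child's output pipe stays open, fully detaching the child is another workaround besides at. A generic demonstration with a stand-in command (for real use you would swap in the cache_dirs path and arguments; whether unmenu behaves this way is an assumption on my part):

```shell
# Redirect all output to a file and background the process, so the
# caller never waits on the child's stdout/stderr pipe. </dev/null
# keeps nohup from complaining about stdin as well.
nohup sh -c 'echo started' </dev/null >/tmp/bg_demo.log 2>&1 &
wait $!                  # only so this demo can read the log right away;
cat /tmp/bg_demo.log     # a caller like unmenu would just carry on
```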
  24. You make it look so easy, Joe. So, do you have a specially customized Joe-version of the script for your own use? I have had cache_dirs on my to-do list for so long and only got around to installing it this week. Should have done it earlier - it's brilliant! Love the stop-array detection; you are truly an artist at this. About the stop-detection, I have been thinking about adding some lines to kill other apps. Not sure I want to keep waiting for 5.0 final.