TSM

Everything posted by TSM

  1. Shouldn't the new GUI be baked into something like a 6.0 Beta12a?
  2. Any folder at the root of cache or an array disk is automatically a user share, so don't be surprised when this new folder you create is listed among your user shares. The disks are at /mnt/diskN, cache is at /mnt/cache, and the user shares are in /mnt/user, so mc is already aware of the user shares. There is also a /mnt/user0, which is the user shares excluding any files still on cache. Once you figure out mc and see how things are laid out, you may find that moving from disk to disk is not as complicated as you might think. Each disk will have a subfolder for each of the user shares that include that disk. For example, if you have a TV share and it has some of its files on disk1, then those will be at /mnt/disk1/TV. If disk1 also has files for a share called Movies, then there will also be a /mnt/disk1/Movies. If TV also includes disk2, then those are at /mnt/disk2/TV, and so on. mc by default is arranged with a left pane and a right pane, and some function buttons across the bottom for file operations. You can think of it as similar to two Windows Explorer windows, with one folder in the left pane and another folder in the right pane. Your "selector" can be moved up and down with the cursor keys, and can be moved between the left and right panes with Tab. There are plenty of guides on the internet, but it's pretty easy to figure out if you just play with it a little.
Thanks for all your help, trurl. I was able to get into Midnight Commander from the console, and I might muck around with something more powerful than telnet.exe, like PuTTY with screen, in the future, if I want to run it sitting at my Windows 7 machine. But the 2 computers are sitting 2 feet from each other, so as long as I have a monitor and keyboard connected to the unraid box, it's not that big a deal to use the console.
With my original post, and the bug posting you linked to, in mind, I guess the upshot is that no matter what tools I use to move the files off of the 6TB drive, it will be a manually intensive process, since I don't have a single disk with enough space on it to move all the files to. I'll probably need to manually move individual folders across 7 to 10 disks to make this work. Even if I hadn't migrated the smaller drives first, I'd still have had to spread it across maybe 4 to 7 disks. That's why the idea of moving the files from disk to user share was appealing to me. Oh well! Maybe I'll wait a few months to do this and ask my wife to get me another 6TB drive for my birthday. Or hey, maybe by then a higher-performance 8TB drive will be out.
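The layout described above is easy to see for yourself. Here is a minimal sketch using a temporary directory to stand in for /mnt (on a real server the paths would be /mnt/disk1, /mnt/disk2, and the merged view at /mnt/user; the share and file names are made up):

```shell
# Stand-in for /mnt; real paths would be /mnt/disk1, /mnt/disk2, etc.
root=$(mktemp -d)
mkdir -p "$root/disk1/TV" "$root/disk1/Movies" "$root/disk2/TV"
touch "$root/disk1/TV/show-a.mkv" "$root/disk2/TV/show-b.mkv"

# The TV user share is simply the union of every disk's TV folder:
ls "$root"/disk*/TV
```

Browsing /mnt/user/TV on a real box would show show-a.mkv and show-b.mkv side by side, even though they live on different physical disks.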
  3. Seems like you should move from disk to disk to better control where things end up, since your objective is to make sure some disks get emptied while not making other disks too full. In any case, there is a known bug that you should be aware of. See here. Probably safer to just do disk-to-disk moves. Disk-to-disk moves using mc in screen is exactly what I did when I converted everything to XFS. Holy crap, thank you for the link to that bug post. I could have done something really bad. I'm surprised LimeTech hasn't made that a top priority to fix. Granted, it will only cause a problem in specific situations for some users, but for those users in that situation it's a pretty big problem. (Like me!) If I exclude the disk from the user share, create a new folder at the root of the disk, move all of the top-level "Share Name" folders into that folder, and then move them from within the new folder, that should work without a problem, I think. Is there a way to make Midnight Commander aware of the user share? Like, could you mount it somehow from the Linux command line before opening mc? I agree disk to disk would be better, but I'm trying to do this the easy way, and there might not be an easy way. As long as I have space in the entire user share to accommodate the contents of the drive, why would a single drive getting too full be a problem? Especially considering that once I get this step done, the drive in question will probably eventually get reformatted anyway.
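The disk-to-disk move recommended above can also be scripted outside of mc. A hedged sketch, using temporary directories in place of /mnt/disk1 and /mnt/disk2 (the paths and file names are stand-ins; `cp -a` preserves ownership and timestamps, and the source is only deleted after the copy verifies):

```shell
src=$(mktemp -d)   # stand-in for /mnt/disk1
dst=$(mktemp -d)   # stand-in for /mnt/disk2
mkdir -p "$src/TV"
touch "$src/TV/episode.mkv"

# Copy the whole share folder disk-to-disk, verify, then remove the source.
cp -a "$src/TV" "$dst/"
diff -r "$src/TV" "$dst/TV" && rm -rf "$src/TV"
```

Because both paths are individual disks rather than the user share, this sidesteps the disk-to-share bug entirely.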
  4. Thanks for the response. I've tried to use Midnight Commander before from a telnet window, but the screen just looked like a garbled mess. I'm at work right now, but when I get home I'll post a screenshot of it. I've never tried it from the console; I guess I could attach a monitor to the box and do it from the console. Can I move files from a disk to a user share in Midnight Commander? Does it intelligently handle folder merges and things like that?
  5. Hi everyone, I was hoping for some advice and remedial instruction. I want to convert all my drives to XFS. I've got 14 data drives of varied size and space used. I've also got files from 5 user shares strewn across pretty much every one of the drives. All drives are included in each share. I started the process already by simply copying the contents of some of my smaller drives (1TB) onto my newest, largest data drive (6TB), using Windows Explorer on my Windows 7 workstation over the network. Copying 1TB of data from one drive to another on the unraid box via my Windows 7 machine, over my home network, took a very long time. I'm not sure how long exactly, because each time there was an error or two that would pop up that I'd have to address, but I might not have known about it for a few hours. I'd say both of my 1TB drives, and a 1.5TB drive, each individually took less than a day, but more than 12 hours, to clean off in this way. So using this methodology my larger drives would take several days. Plus, I will soon get to the point where I don't have enough space on a single drive to move the contents of some of my larger drives to. My idea there, for example, would be to exclude the drive from each of the user shares, and then copy its contents onto each user share individually. I think there is enough space on the rest of the drives to accommodate the files I have on the 6TB drive, but they would all have to take some files. Once I did that, it would be easy to move the files from the next-largest drive onto the 6TB drive, and keep going like this until all drives are converted. But this method will be very time-consuming, and seems a bit silly when logically I know there has to be a better way. I'd like to be able to do this all from the unraid box itself, leaving the network and my Windows 7 machine out of it. Assuming I'm a complete newb and know absolutely nothing, how would I go about this process?
I'm already familiar with telnetting to the server, which I've done on an occasion or two to complete specific tasks. Thanks in advance.
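The times reported above are roughly what back-of-envelope arithmetic predicts, and show why a local copy is so much faster. Assuming ~15 MB/s for the network path and ~100 MB/s for a disk-to-disk copy on the server (both assumed figures, not measurements):

```shell
# hours ~= size_in_MB / (MB_per_s * 3600); 1 TB is roughly 1,000,000 MB
network_hours=$(( 1000000 / 15 / 3600 ))    # over SMB via the workstation
local_hours=$((   1000000 / 100 / 3600 ))   # disk-to-disk on the server
echo "network: ~${network_hours}h   local: ~${local_hours}h"
```

At ~15 MB/s the math lands around 18 hours per terabyte, squarely inside the "more than 12 hours, less than a day" range described, while a local copy would be a few hours per terabyte.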
  6. I have had the same issue many times, and I always just replace the oldest drive.
  7. I came across this forum post, where Garycase suspected the board would be fine. And I also found a few other forum posts mentioning the storage chipset in other contexts, but most were in reference to systems that were working. Not the fastest processor, but it should run circles around my current 1-core 2GHz Celeron. http://lime-technology.com/forum/index.php?topic=34497.msg320960#msg320960
  8. Anyone know of a reason why this board wouldn't work with unraid? My mobo and proc are about six and a half years old now; time to start thinking about upgrading them. http://www.newegg.com/Product/Product.aspx?Item=N82E16813132230 It's a mini-ITX board, but you should be able to connect up to 18 SATA drives to it without any plug-in cards by using SAS breakout cables.
  9. Article says the drives are slow. I wonder how they'll stack up against some of the slowpoke green drives, because I have several of those. They're slow, but not unworkable.
  10. That is awesome, thank you to you and Tom. Will install it this weekend. I've been contemplating virtualizing unraid with Hyper-V when I finally get around to upgrading my server hardware. I'm curious how this went, if you have anything to report?
  11. So the potential to store the data on a fast subsystem exists, but the complexity in doing so could be overwhelming enough to not make it worthwhile. Worthwhile? I don't know. I'm usually very happy with my unraid, but sometimes it's so slow to traverse complex directory structures that can span a lot of disks that the application I'm trying to use with the file I'm looking for will time out. I've had application crashes because of this, etc. Not to mention just being generally pissed off wondering when it will do whatever it is that it's doing to a particular folder in order to open it. It makes me wonder how many people have had this problem and then ran away from unraid to something else, without bothering to take the time to figure out what's really going on. I don't know; looking at it from that perspective, this would seem important and maybe a worthwhile pursuit.
  12. I've got one of those Supermicro cards. Ever since upgrading to the 6 betas I've had weird problems with the server not waking back up after the drives have spun down, and then having to hard-boot it for it to be functional again. For the past 8 days I have had the drives set to never spin down, and I've had no problems. Could the Supermicro card be the reason?
  13. Maybe I missed something in all of this, and maybe my understanding of the concepts involved isn't good enough. But why wouldn't it be possible to have the cache_dirs info in active memory dumped to disk on command or on a scheduled basis, and then just have the capability of reading that file back into active memory on boot? I would think you would want this disk to not be an array member, but I don't know.
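The idea in the post above can be sketched in a few lines: write the directory tree out to a file on a schedule, then re-walk that list at boot so the metadata is pulled back into the kernel's dentry cache. This is only an illustration of the concept, not how the actual cache_dirs plugin works, and the paths are stand-ins:

```shell
top=$(mktemp -d)             # stand-in for a share's directory tree
dump=$(mktemp)               # the on-disk "snapshot" of that tree
mkdir -p "$top/TV/Season1"
touch "$top/TV/Season1/ep1.mkv"

# Scheduled step: dump the tree listing to disk.
find "$top" > "$dump"

# Boot step: stat every recorded path so the kernel caches are warm again.
while read -r p; do stat "$p" > /dev/null; done < "$dump"
```

One caveat the post hints at: the kernel evicts cached dentries under memory pressure, so a one-time warm-up at boot only helps until the entries age out again.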
  14. I posted this in the Beta 9 announcement post this morning. "Has anyone else had the experience of their unraid server becoming completely unresponsive after upgrading to Beta9? 99% of the time, it's been working fine since the upgrade. In fact, maybe a very tiny bit better than it used to. But the unresponsive behavior is very annoying, because when it happens, I don't know if it's the standard unresponsiveness that happens sometimes when the server hasn't been used for several hours and the drives need to spin up, or if it's something worse happening. If the complete unresponsiveness is going to happen, it's much more likely to happen in the morning after my drives have been spun down for a while. I have the mover set to run at the default 3:40am time. When this unresponsiveness happens in the morning, I have to hard-boot the server to get it back up again. And in the gui I can see that the mover did not run overnight. So it's not just the webgui that has stopped functioning, or a Samba problem because I can't get to the shares on Windows 7; the Linux software that allows the mover to run also stopped functioning. Also, and I don't know if this is related or not, the power down function no longer works either after upgrading to Beta9. There was another thread on the board somewhere where someone else reported this behavior, and I posted that it happens to me too. The Reboot function appears to still be working, but not power down. I end up having to do a hard shutdown when I want the server to shut down. I don't shut down the server that frequently; I usually just reboot it when the need for that sort of thing arises. So honestly I'm not sure if that is a Beta9 thing or not. I know it used to work on unraid 5.05, but I couldn't say at what point it stopped working, as I've used a couple of the 6 betas.
" ____________________________________________________________________________________________ Well about 30 minutes ago, for the first time ever my unraid system really locked up hard while I was actually using it. So, drives not spun down or anything, just finished copying about 500megs of stuff to it. Attempted to copy a single small 29K file over to it, and no go. Further investigation revealed the thing to be locked up tight. I've been using unraid since 2008 and that has never happened before. Granted a lot of the components in my server are circa 2008, including motherboard, memory, processor and etc. So I guess it could possibly be a hardware fault, but I have my doubts since others are experiencing similar things. No VT-d option in my board's bios. Probably too old. Not using any plugins, or playing around with any of the new-fangled virtual stuff. The only fancy thing I'm doing is I have 2 SSD drives in a mirror for my cache.
  15. I find that, sometimes, the web gui appears to lock up - it can take up to a minute for it to respond. No idea what causes it, and there's nothing of note in the syslog. In my experience, and from what I've seen written by others, that's just unraid. Sometimes it locks up for a minute. Not really locking up, though; it's usually doing something like reading from a drive, pulling its file system into active memory or something. I'm sure there are others here who could explain it much better than I can. I think the cache_dirs plugin is supposed to help with this by keeping the drive file system in active memory, but I've never fooled around with it personally.
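What the cache_dirs plugin does is essentially this: periodically walk the share trees so their directory metadata stays resident in memory, and browsing a folder doesn't block waiting on a spun-down disk. A minimal sketch of that one pass, with a temporary directory standing in for /mnt/user and made-up share names:

```shell
root=$(mktemp -d)            # stand-in for /mnt/user
mkdir -p "$root/TV" "$root/Movies"

# Walking each share pulls its dentries/inodes into the kernel cache;
# the real plugin repeats a walk like this on a timer so entries
# evicted under memory pressure get re-cached.
for share in "$root"/*/; do
    find "$share" > /dev/null
done
```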
  16. Has anyone else had the experience of their unraid server becoming completely unresponsive after upgrading to Beta9? 99% of the time, it's been working fine since the upgrade. In fact, maybe a very tiny bit better than it used to. But the unresponsive behavior is very annoying, because when it happens, I don't know if it's the standard unresponsiveness that happens sometimes when the server hasn't been used for several hours and the drives need to spin up, or if it's something worse happening. If the complete unresponsiveness is going to happen, it's much more likely to happen in the morning after my drives have been spun down for a while. I have the mover set to run at the default 3:40am time. When this unresponsiveness happens in the morning, I have to hard-boot the server to get it back up again. And in the gui I can see that the mover did not run overnight. So it's not just the webgui that has stopped functioning, or a Samba problem because I can't get to the shares on Windows 7; the Linux software that allows the mover to run also stopped functioning. Also, and I don't know if this is related or not, the power down function no longer works either after upgrading to Beta9. There was another thread on the board somewhere where someone else reported this behavior, and I posted that it happens to me too. The Reboot function appears to still be working, but not power down. I end up having to do a hard shutdown when I want the server to shut down. I don't shut down the server that frequently; I usually just reboot it when the need for that sort of thing arises. So honestly I'm not sure if that is a Beta9 thing or not. I know it used to work on unraid 5.05, but I couldn't say at what point it stopped working, as I've used a couple of the 6 betas.
  17. This worries me. I was about to upgrade from Beta7 to Beta9. I have btrfs-formatted mirrored SSD cache drives. Because of the current problems, I haven't let mover run, and I've got about 60 gigs of files on my cache. Will the files and the formatting of the cache still be in place after upgrading? Said screw it. Copied the contents of my cache drive to my Google Drive. Got a free 100 gigs when I bought my Chromebook 8 months ago. Upgrade from Beta7 to Beta9 seems to have gone smoothly. Cache drives look normal. Ran mover manually, seems fine. Then reset it back to the nightly schedule. Your mileage may vary; f**k if I know if it will work as well for you.
  18. This worries me. I was about to upgrade from Beta7 to Beta9. I have btrfs-formatted mirrored SSD cache drives. Because of the current problems, I haven't let mover run, and I've got about 60 gigs of files on my cache. Will the files and the formatting of the cache still be in place after upgrading?
  19. It occurs to me that the question in my last post might have been hard to discern because of the way the quoting levels made the post look. My question was... "If the space reporting is accurate, granted that it might not be, it says that 2.25 gigs are being used. And I swear that an unmodified docker folder is currently the only thing on the cache pool. I didn't have a cache drive before installing Beta6. And on Beta6 it was usually listed as being between 10 and 20 megs. And I didn't care. And granted, 2.25 gigs isn't really that much either, but on a 128-gig pool, it's enough to be annoying to me if true."
  20. TSM

    Dual Parity

    I really dislike the fact that every time, EVERY TIME!!!, dual parity drives come up as a topic, backups always become part of the conversation. The necessity for dual or more parity drives, and the best methodology for backing up your unraid box, are really 2 different discussions. Both discussions have merit, and are somewhat related to each other, but they are 2 different discussions.
  21. Leave it alone. It's 10MB. Not hurting anything... If the space reporting is accurate, granted that it might not be, it says that 2.25 gigs are being used. And I swear that an unmodified docker folder is currently the only thing on the cache pool. I didn't have a cache drive before installing Beta6. And on Beta6 it was usually listed as being between 10 and 20 megs. And I didn't care. And granted, 2.25 gigs isn't really that much either, but on a 128-gig pool, it's enough to be annoying to me if true.
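One thing worth checking before trusting the reported number: the docker image is typically a sparse loopback file, and sparse files make naive size readings misleading, because apparent size and allocated blocks differ. (btrfs's own chunk allocation can also inflate "used" figures; checking that needs `btrfs filesystem df` on the real pool.) A quick, self-contained demonstration of the sparse-file effect, using a throwaway file rather than a real docker image:

```shell
f=$(mktemp)
truncate -s 1G "$f"                       # apparent size: 1 GiB, nothing written
apparent=$(stat -c %s "$f")               # bytes by file length
actual=$(( $(stat -c %b "$f") * 512 ))    # bytes actually allocated on disk
echo "apparent=$apparent actual=$actual"
```

Depending on which of these two numbers a given tool reports, the same file can look like 1 GiB or nearly nothing, which is one plausible source of the discrepancy.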
  22. TSM

    Extra redundancy

    Would developing this delay the release of an unraid version with dual parity? If it's all the same, go for it, awesome. I'd only use 2 parity drives, but I'm sure there are those who would sleep easier at night with 3 parity drives.
  23. TSM

    Dual Parity

    Some great ideas are being discussed in this section, but for me this is the most important. In other threads about adding a second parity drive some "helpful person" always chimes in about how having a good backup solution is more important than the addition of this feature would be, and then it's another year before someone brings it up again. If you don't want unraid to get this feature, then when it's finally implemented you don't have to use it.
  24. Adding my support; definitely necessary, and it seems like something that would be easy to do. I am very color blind.