callmeedin

  1. I think the only thing I needed to do prior to removing the cache drive was to set "Use cache disk" to No in Global Share Settings. That in turn stops the Mover from appearing in the Scheduler.
  2. As soon as I sent my reply, I figured I was better off starting a new thread. :-)
  3. So I used a cache drive for a short while (while I was converting my disks to XFS). Is there a proper way to remove it? I just stopped the array and set the cache slot to "no device". Restarted the array and everything is running OK, but under Global Share Settings/Cache Settings I see "Use cache disk" set to Yes, and I also see the Mover Settings option in Settings/Scheduler. I don't remember if those were there prior to me adding a cache drive. Should I have followed a different procedure for removing the cache drive, or is what I am seeing "normal" for a system without a cache drive? Thanks.
  4. Quick update: All drives converted to XFS and no lockups during the conversion process. I have disabled the cache drive (I just don't need it for my setup) after the RFS->XFS conversion was completed on all disk drives, and everything is running smoothly so far. I do have a question about disabling the cache drive: I still see the Mover in the schedule after I removed the cache drive. Is that normal (I didn't really pay attention prior to adding the cache drive)? If it is not normal, do I need to run a "new config" (and check "parity is already valid")!? Or is seeing the Mover in the schedule normal behaviour regardless of whether a cache drive is present or not? I also see the "Use cache disk" option set to Yes under Global Share Settings, but even with the array stopped, I can't change that setting when no cache disk is present. So what is the proper way to remove a cache disk?
  5. Still going strong. After everything I have hit the unRAID box with in the last 7 days, I am confident that ReiserFS is the issue. I have:
     1. Moved 1.7TB from an older 2TB RFS drive to the XFS cache drive
     2. Removed that older 2TB RFS drive to make space for a 4TB drive, so a rebuild of parity was in order
     3. Manually moved the 1.7TB of data from the cache drive to the newly XFS-formatted 4TB drive
     4. Started moving data and reformatting drives to XFS ... working on the 3rd drive right now (a sketch of this disk-to-disk copy step is shown after this list)
     Almost all of this was done on my 2GB of RAM and Sempron 145. Yesterday I upgraded to 10GB of RAM and an AMD Phenom II X2, and I can tell the box particularly likes the faster processor ... the dashboard doesn't have the CPU pegged at 100% anymore during any of these tasks. I don't see much usage of the added RAM. I still have all shares set up to use the cache and the Mover set not to run until the 1st of the month, so the Mover doesn't decide to write to the still-existing RFS drives. Again, thanks for all your help.
  6. Update & observation: Still no lockups while using the XFS cache drive as the only drive to "write" to, so I am starting to feel confident that switching all the drives to XFS will fix my lockup issues. Now for the observation: In preparation to free up one of my drives to reformat it as XFS, I am using robocopy (on a virtual machine that lives outside of the unRAID server) to copy 1.7TB of data from one of the unRAID drives to a separate Windows server. I noticed in the robocopy logs that over the span of 10+ hours there were 5-6 network connection problems that robocopy encountered and "resolved" by waiting 30 seconds and retrying (an example invocation with those retry settings is sketched after this list). Whatever the cause of the network connection problems (my network, the unRAID MB NIC, weak unRAID hardware, a glitch in the matrix, etc.), I am wondering whether ReiserFS is just not as good at dealing with network glitches during write operations, which causes a lockup of the SMB shares, while XFS can handle them. Sorry if I am stating something obvious here -- just wanted to share my observations.
  7. Thanks for the suggestion. I tried it and the CPU definitely stays at 100% for quite some time. The CPUs that my motherboard supports are not expensive (used, though), so I will probably just pull the trigger and upgrade. Can't hurt.
  8. Quick update -- so far so good with just running on the cache drive (formatted XFS): still running without SMB lockups. Questions -- while I don't think the CPU & RAM are the cause, I plan on upgrading them anyway. I already ordered 8GB of RAM and was wondering if I would gain anything by upgrading the CPU (currently an AMD Sempron 145)!? I'm thinking that at least a faster CPU would help with parity checks, or is something else the bottleneck during parity checks!? If a CPU upgrade makes sense, any recommendations as to what would be a worthy upgrade? Clearly this MB will only support CPUs that are sold used at this point, so I am looking at either an AMD Phenom II X2 (or X4).
  9. Thank you for the detailed explanation. I am now set up the same way ... we will see what happens. Thanks again to everybody for their help.
  10. BrianAZ -- that is very valuable info. Can you give me a few more details about how to perform the same test, i.e. use just one drive with XFS and basically write all new stuff to it? It seems that writing to a ReiserFS disk can cause the problem, but not reading from a ReiserFS disk, right!? So with your suggested "proof of concept" setup, I could still use all the files on the other disks, but would only write to the XFS disk!? This might be the route I take -- I would know within 7-10 days if writing to my ReiserFS disks is indeed my problem. I have never gone more than 5 days with v6 before a lockup happens.
  11. Thanks for the info on memory. The fresh, from-scratch v6 install just got hung up again -- or more exactly, the SMB shares got hung up: the web interface, telnet connection, etc. are all still working. Either way -- going back to 5.0.6. There is really no pressing need for me to be on v6, other than the desire for the "latest and greatest", but I am not ready to convert 19 disks to XFS just yet.
  12. I was going to get 8GB. Should I get more? Is there a recommended amount for unRAID?
  13. I listed the wrong MB. It is an ASRock 990FX Extreme 3. I will order and add more RAM -- that is easy.
  14. The disks are pretty full ... all 2TB disks with free space between 125GB and 250GB. So what was your solution to the problem? Switching them all to XFS!? I am just afraid to do that, since it is a point of no return and I couldn't go back to my stable v5 in case the filesystem is not my problem to begin with. Good catch on the MB. It is in fact an ASRock 990FX Extreme 3 motherboard. Sorry about that. :-)
  15. Just rebuilt from scratch as suggested. Other than recreating the array layout, shares, and users, no other changes have been made to the default system. Let's see what happens. As far as hardware goes, it's not the most powerful, but I wouldn't say it is OLD-OLD. I run a barebones unRAID with no addons. Hardware specs:
      ASRock Z77 Extreme4
      ASRock 990FX Extreme 3
      AMD Sempron 145
      2GB DDR3-1145, single module
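
A minimal sketch of the disk-to-disk copy step mentioned in post 5, assuming a console/SSH session on the unRAID box; the disk numbers (/mnt/disk3 as the ReiserFS source, /mnt/disk5 as the freshly XFS-formatted target) are placeholders, not the actual layout:

    # Copy everything from the ReiserFS source disk to the XFS target disk,
    # preserving permissions, timestamps, and extended attributes (-a, -X).
    rsync -avX --progress /mnt/disk3/ /mnt/disk5/

    # Once the copy is verified, the emptied source disk can be reformatted
    # to XFS from the web UI and refilled the same way.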
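
The robocopy run described in post 6 rode out the network drops using robocopy's built-in retry behaviour. A hedged example of that kind of invocation from the Windows command line (the share name, destination path, and log path are made up):

    REM Copy a share from the unRAID box to a local Windows drive, including
    REM empty subfolders (/E), in restartable mode (/Z). /R:10 retries each
    REM failed file up to 10 times and /W:30 waits 30 seconds between retries.
    robocopy \\TOWER\Media D:\MediaBackup /E /Z /R:10 /W:30 /LOG:C:\robocopy.log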