callmeedin

Members
  • Posts: 27

Everything posted by callmeedin

  1. I think the only thing I needed to do prior to removing the cache drive is to set "Use cache disk" to No in Global Share Settings. That in turn stops the Mover from appearing in the Scheduler.
  2. As soon as I sent my reply, I figured I was better off starting a new thread. :-)
  3. So I used a cache drive for a short while (while I was converting my disks to XFS). Is there a proper way to remove it? I just stopped the array, set the cache slot to "no device", restarted the array, and everything is running OK. But under Global Share Settings/Cache Settings, "Use cache disk" is still set to Yes, and I also still see the Mover Settings option under Settings/Scheduler. I don't remember if those were there before I added the cache drive. Should I have followed a different procedure for removing the cache drive, or is what I am seeing "normal" for a system without a cache drive? Thanks.
  4. Quick update: All drives are converted to XFS and there were no lockups during the conversion process. I disabled the cache drive (I just don't need it for my setup) after the RFS->XFS conversion was completed on all data drives, and everything is running smoothly so far. I do have a question about disabling the cache drive: I still see the Mover in the Scheduler after removing the cache drive. Is that normal (I didn't really pay attention before adding the cache drive)? If it is not normal, do I need to run "New Config" (and check "parity is already valid")? Or is seeing the Mover in the Scheduler normal behaviour regardless of whether a cache drive is present? I also see the "Use cache disk" option set to Yes under Global Share Settings, but even with the array stopped I can't change that setting when no cache disk is present. So what is the proper way to remove a cache disk?
  5. Still going strong. After everything I have hit the unRAID box with in the last 7 days, I am confident that ReiserFS is the issue. I have: 1. Moved 1.7TB from an older 2TB RFS drive to the XFS cache drive. 2. Removed that older 2TB RFS drive to make space for a 4TB drive, so a parity rebuild was in order. 3. Manually moved the 1.7TB of data from the cache drive to the newly XFS-formatted 4TB drive (roughly the copy sketched below). 4. Started moving data and reformatting drives to XFS ... working on the 3rd drive right now. Almost all of this was done on my 2GB of RAM and the Sempron 145. Yesterday I upgraded to 10GB of RAM and an AMD Phenom II X2, and I can tell the box particularly likes the faster processor ... the dashboard no longer shows the CPU pegged at 100% during any of these tasks. I don't see much usage of the added RAM. I still have all shares set up to use the cache and the Mover set not to run until the 1st of the month, so the Mover doesn't decide to write to the still-existing RFS drives. Again, thanks for all your help.
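     For anyone doing the same disk-to-disk move, the console copy can look something like this (rsync is just one way to do it; the disk numbers and paths are examples, not necessarily what I typed):

        # copy everything from the cache drive onto the new XFS-formatted disk,
        # preserving permissions/timestamps and showing progress (paths are placeholders)
        rsync -avPX /mnt/cache/ /mnt/disk5/
        # only after spot-checking the copy, clear the source on the cache drive
        # rm -r /mnt/cache/*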
  6. Update & observation: Still no lockups with using the XFS Cache drive as the only drive to "write" to. So I am starting to feel confident that switching all the drives to XFS will fix my lockup issues. Now for the observation: In preparation to free up one of my drives to reformat it as XFS, I am using robocopy (on a virtual machine that lives outside of the Unraid server) to copy 1.7TB of data from one of the Unraid drives to a separate windows server. I noticed in the robocopy logs that in the span of 10+hours there were 5-6 network connection problems that robocopy encountered and "resolved" after waiting 30 seconds to retry. Whatever the cause of the network connection problem (my network, Unraid MB NIC, weak Unraid hardware, glitch in the matrix, etc.), I am wondering if ReiserFS is just not as good as dealing with network glitches during write operations which causes a lockup of the smb shares, while XFS can handle them. Sorry if I am stating something obvious here -- just wanted to share my observations.
  7. Thanks for the suggestion. Tried it and the CPU definitely stays at 100% for quite some time. The CPUs that my motherboard supports are not expensive (used, though), so I will probably just pull the trigger and upgrade. Can't hurt.
  8. Quick update -- so far so good just running on the cache drive (formatted XFS): still no SMB lockups. Questions -- while I don't think the CPU & RAM are the cause, I plan on upgrading them. I already ordered 8GB of RAM and was wondering if I would gain anything by also upgrading the CPU (currently an AMD Sempron 145)? Thinking that at least a faster CPU would help with parity checks, or is something else the bottleneck during parity checks? If a CPU upgrade makes sense, any recommendations as to what would be a worthy upgrade? Clearly this MB will only support CPUs that are sold used at this point, so I am looking at either an AMD Phenom II X2 (or X4).
  9. Thank you for the detailed explanation. I am now set up the same way ... will see what happens. Thanks again to everybody for their help.
  10. BrianAZ -- that is very valuable info. Can you give me a little more detail on how to perform the same test: just use one drive with XFS and basically write all new stuff to it? It seems that writing to a ReiserFS disk can cause the problem, but not reading from a ReiserFS disk, right? So with your suggested "proof of concept" setup, I could still use all the files on the other disks, but would only write to the XFS disk? This might be the route I take -- I would know within 7-10 days if writing to my ReiserFS disks is indeed my problem. I have never gone more than 5 days on v6 before a lockup happens.
  11. Thanks for the info on memory. The fresh, from-scratch v6 install just got hung up again -- or more exactly, the SMB shares got hung up: the web interface, telnet connection, etc. are all still working. Either way -- going back to 5.0.6. There is really no pressing need for me to be on v6, other than the desire for the "latest and greatest", but I am not ready to convert 19 disks to XFS just yet.
  12. Was going to get 8GB. Should I get more? Is there a recommended value for unRAID?
  13. I listed the wrong MB. It is an ASRock 990FX Extreme 3. I will order and add more RAM -- that is easy.
  14. The disks are pretty full ... all 2TB disks with free space between 125GB and 250GB. So what was your solution to the problem? Switching them all to XFS? I am just afraid to do that, since that is a point of no return and I couldn't go back to my stable v5 in case the filesystem is not my problem to begin with. Good catch on the MB. It is in fact an ASRock 990FX Extreme 3 motherboard. Sorry about that. :-)
  15. Just rebuilt from scratch as suggested. Other than recreating the array layout, shares & users, no other changes have been made to the default system. Let's see what happens. As for hardware, it's not the most powerful, but I wouldn't say it is OLD-OLD. I run a barebones unRAID with no addons. Hardware specs: ASRock Z77 Extreme4, ASRock 990FX Extreme 3, AMD Sempron 145, 2GB DDR3-1145 single module.
  16. ASRock Z77 Extreme4, AMD Sempron 145, 2GB DDR3-1145 single module. Listed the wrong MB: it is an ASRock 990FX Extreme 3.
  17. Just checked and I have the Z77 Extreme 4 MB. Does that use the same chipset?
      03:00.0 Ethernet controller [0200]: Broadcom Corporation NetLink BCM57781 Gigabit Ethernet PCIe [14e4:16b1] (rev 10)
          Subsystem: ASRock Incorporation Z77 Extreme4 motherboard [1849:96b1]
          Kernel driver in use: tg3
          Kernel modules: tg3
  18. Adam64 -- how did you discover that it was an Intel I218-V ethernet issue, and was there a fix (other than adding a new network card)? I too have a v6 system that becomes unusable every few days (I opened a new post on it this morning), and I have an ASRock motherboard in it (like the original poster here), but I am not sure what onboard ethernet it has. Thanks.
  19. I have been running 5.0.8 without any problems ever since it was released. It has been rock solid. When v6 first came out, I upgraded, and other than having to add the parameter pci=realloc=off to my syslinux.cfg file (in order for the system to see the HBA cards), no other changes were needed. Within a few days of running v6, the system would lock up, the 2 samba shares that I have would become unresponsive, and the only way for me to bring the system back online would be to hard reset the computer. I gave up on v6 rather quickly and reverted back to 5.0.8 -- rock solid again. Figured with v6.1.9 I'd give it another go, but the same thing is happening -- within a few days the samba shares become unresponsive. I searched the forums, but no clear problem/solution has been identified. I did see some references to checking & repairing ReiserFS and I have done that (roughly the commands sketched below): only one drive had some problems and I repaired them. But even after that, the lockups happen again. It can take 1 day or even 4 days before it locks up. It does seem to happen during a transfer of files to the samba share (automated Sonarr transfers, for example). Since I know that v5.0.8 is stable, I revert back to it rather quickly, so I don't have to mess with the lockups. I finally decided to capture diagnostics and I am attaching them to this post. Any ideas/help would be appreciated. Thanks. sarajevo-diagnostics-20160428-2211.zip
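     For reference, the ReiserFS check/repair was done from the console with the array started in maintenance mode, roughly like this (the disk number is just an example; the check output tells you if a stronger repair option is needed):

        # read-only check of a data disk (md device, so parity stays in sync)
        reiserfsck --check /dev/md1
        # one disk reported fixable corruption, so run the suggested repair
        reiserfsck --fix-fixable /dev/md1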
  20. Adding the boot parameter pci=realloc=off (as described in the link above) fixed the issue. Up and running with 6.0.1 now.
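     For anyone hitting the same thing, the boot entry in syslinux.cfg on the flash drive ends up looking roughly like this (pci=realloc=off is the only change; the rest is the stock entry as I remember it):

        label unRAID OS
          menu default
          kernel /bzimage
          append pci=realloc=off initrd=/bzroot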
  21. I think it is the identical issue discussed in this thread: https://lime-technology.com/forum/index.php?topic=39077.0 Will try adding the kernel parameter as described and report back.
  22. Upgraded both cards to latest firmware (20) and still having the same issue. Attached is the updated diagnostics zip. Thanks for all your help. Much appreciated. sarajevo-diagnostics-20150817-1840.zip
  23. Pro.key is already in the config folder in my 5.0.6 install, so yes, I copied it over to 6.0.1 install.
  24. Diagnostics zip attached. sarajevo-diagnostics-20150816-1008.zip
  25. Did a fresh upgrade to 6.0.1 (from 5.0.6) following the suggested procedure: reformat the USB, do a fresh install of 6.0.1, copy over the config folder excluding the go file (roughly as sketched below), etc. The server booted fine, but the array doesn't start because unRAID only sees 5 of the 18 drives I have. I am guessing it only sees the drives attached directly to the motherboard, but not the drives attached to the two IBM M1015 cards (flashed to IT mode). Any ideas what would cause this in 6.0.1? I have since reverted back to 5.0.6 and everything works fine. I did run Diagnostics in 6.0.1 and have the diagnostics.zip with the various files. Is there anything in particular from those files that I could post that would help figure out the issue? Thanks in advance for your help.
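     The config copy step was basically this, done from a machine that could see both the old flash backup and the freshly prepared stick (paths are just examples of how it could be done):

        # copy the old config over to the new 6.0.1 stick, leaving out the go
        # file so the stock one from the fresh install stays in place
        rsync -av --exclude=go /path/to/old-flash-backup/config/ /media/UNRAID/config/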