luca

Members
  • Posts: 164
  • Gender: Undisclosed

luca's Achievements

  • Apprentice (3/14)
  • Reputation: 0

  1. I found myself in the same situation: an X520 card (Intel 82599 based) on my unRAID server and a MikroTik CRS305-1G-4S+IN switch, with 10Gtek transceivers at both ends. The ping was solid, but the link was going down very briefly every few seconds. For some reason the MikroTik switch came with RouterOS enabled. Since I did not need the router functionality, I rebooted it into the switch OS (SwOS). Just like that, the flapping on the X520 link is gone. I just checked, and the link did not go down once overnight. Maybe it's just a coincidence, not sure, and as always YMMV, but I thought I'd mention it. The transceivers are 10Gtek AXS85-192-M3, and the PCIe card is also from 10Gtek, model X520-10G-1S-X8.
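
     A minimal sketch (mine, not from the post above), assuming the ixgbe driver's usual "NIC Link is Up" / "NIC Link is Down" kernel messages and a syslog at /var/log/syslog, for counting how often each interface flaps:

         import re
         from collections import Counter

         SYSLOG = "/var/log/syslog"  # assumed path; adjust for your system

         # ixgbe typically logs "... eth0: NIC Link is Up 10 Gbps ..." and "... NIC Link is Down"
         pattern = re.compile(r"(\S+): NIC Link is (Up|Down)")

         events = Counter()
         with open(SYSLOG, errors="replace") as f:
             for line in f:
                 m = pattern.search(line)
                 if m:
                     events[(m.group(1), m.group(2))] += 1

         for (iface, state), count in sorted(events.items()):
             print(f"{iface}: link {state} x{count}")
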
  2. Yesterday I finally finished moving the data from the drives being removed to the drives that are staying. The server is now rebuilding parity on the new config; the array was shrunk from 22 to 13 drives (plus parity and cache). unRAID rocks.
  3. I do take a screenshot of the drive assignments from time to time, which has helped over the years. Thank you to FreeMan and itimpi for helping me.
  4. @FreeMan So parity is the only disk I need to worry about for keeping the array's integrity; that's going to make the whole process a lot easier. I don't believe I have any per-disk exclusions, but I'll double-check. The server does have hot-swap cages, but I don't mind stopping the array first to avoid any potential issues. @itimpi That's even better, thanks! Wow, unRAID is very smart.
  5. I'm planning to reduce the amount of data currently stored on my unRAID server. This is what I was thinking; please correct any flaws:
     1. Remove a disk from the existing array.
     2. Stop the array.
     3. Upon starting the array, the removed disk shows up as "missing".
     4. Run Tools / New Config (keeping the existing parity disk).
     5. Start the array / run the parity-sync.
     6. If the parity-sync is successful, rinse and repeat.
     In step 4, when I re-assign the disks to the array, does it matter whether they are in the same slots as before, and do I need to respect any gaps (i.e. if disk 15 was unassigned before, does it still need to be unassigned in the new config?)
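
     A minimal sketch (mine, not part of the procedure above), assuming unRAID's usual /mnt/diskN mount points, to confirm a disk really holds no files before dropping it in the New Config step:

         import os

         DISK = "/mnt/disk1"  # hypothetical example; point this at the disk you plan to remove

         total_files = 0
         total_bytes = 0
         for root, dirs, files in os.walk(DISK):
             for name in files:
                 path = os.path.join(root, name)
                 try:
                     total_bytes += os.path.getsize(path)
                 except OSError:
                     pass  # file vanished or unreadable; skip it
                 total_files += 1

         print(f"{DISK}: {total_files} files, {total_bytes / 1e9:.2f} GB still present")
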
  6. Sorry, I'm a dummy. I didn't check the logs first. Apparently it was caused by another XFS corruption problem (on the same drive as before, too). After running xfs_repair on the drive, the share is populated again. I'm curious whether it's by design that everything seems to grind to a halt when there is a problem with one of the drives? Wouldn't it make sense to keep the files that are still accessible available? This bit in the log was pointing out the problem (and the solution) all along:
  7. Unraid 6.2.4. I shrunk the array twice, removing 2 drives in succession, following this procedure (the "Clear Drive Then Remove Drive" method): https://lime-technology.com/wiki/index.php/Shrink_array#Procedure_2 Both times I had no errors, and the array restarted fine with no need for a parity check. The content of one of the user shares (let's call it "data") is not shown. "data" was the only folder present on the drives I removed, but of course its content was copied to the remaining drives before removing them. I can still see/access the files via the disk shares (\\fs2\disk1\data, \\fs2\disk2\data, and so on), but the user share itself is empty. I checked the Shares tab, and the "data" share is still a public share, with all drives included and none excluded. I also tried stopping/restarting the array. Any idea what is going on?
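
     A minimal sketch (mine), assuming unRAID's standard /mnt/disk* and /mnt/user mount points and the "data" share from the post, to compare what each disk holds against what the merged user share exposes:

         import glob
         import os

         SHARE = "data"  # share name from the post; substitute your own

         # Per-disk view: which array disks contain a top-level folder for this share?
         for disk in sorted(glob.glob("/mnt/disk*")):
             path = os.path.join(disk, SHARE)
             if os.path.isdir(path):
                 print(f"{path}: {len(os.listdir(path))} top-level entries")

         # Merged view: what the user share (and hence \\fs2\data) should expose
         user_path = os.path.join("/mnt/user", SHARE)
         if os.path.isdir(user_path):
             print(f"{user_path}: {len(os.listdir(user_path))} top-level entries")
         else:
             print(f"{user_path} does not exist")
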
  8. I've done that. Rebooted, and the array started immediately. No apparent data loss, though there is now a lost+found folder in /disk10, with 0 bytes in it. Here's the repair output, just in case anyone cares:
  9. Thanks for replying, guys. I'm in maintenance mode and got this from xfs_repair: I'm thinking I should bite the bullet and use -L, since mounting the drive is what was failing before. What do you recommend? Is it possible this is just filesystem corruption, and there is actually nothing wrong with the physical drive?
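
     A minimal sketch (mine) of the dry-run-first approach, assuming the array is started in maintenance mode and that the affected device is /dev/md10 as in the surrounding posts: xfs_repair -n only reports what it would change, while -L zeroes the XFS log and is the last resort.

         import subprocess

         DEVICE = "/dev/md10"  # assumed; use the md device of the affected disk

         # Dry run: -n reports problems without modifying the filesystem.
         dry = subprocess.run(["xfs_repair", "-n", DEVICE], capture_output=True, text=True)
         print(dry.stdout)
         print(dry.stderr)

         # Only if the dry run insists the log must be cleared, re-run with -L
         # (this discards the XFS log and can lose the most recent metadata changes):
         # subprocess.run(["xfs_repair", "-L", DEVICE])
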
  10. unRAID 6.19 server (fs2). I could not copy to it today (Windows complained of an I/O error), so I attempted restarting the system, but it hung on unmounting the disks. I forced a reboot and it booted up fine, all drives green, but it has been hanging again at "mounting disks...." for the past 20 minutes. The GUI is also unresponsive. From the syslog: I figured it's probably a failing drive or cables that need re-seating, but which one? Also, what is md10 (all drives are relatively recent SATA drives: sda, sdb, etc.)? fs2.zip
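
     For what it's worth, in unRAID /dev/mdN is the array device for Disk N, so md10 corresponds to Disk 10. A minimal sketch (mine), assuming a syslog at /var/log/syslog, that groups error-looking lines by the device they mention to help narrow down which drive is complaining:

         import re
         from collections import Counter

         SYSLOG = "/var/log/syslog"  # assumed path; point it at the syslog from the diagnostics zip if needed

         dev_pattern = re.compile(r"\b(md\d+|sd[a-z]+)\b")
         errors = Counter()

         with open(SYSLOG, errors="replace") as f:
             for line in f:
                 if any(word in line for word in ("error", "Error", "I/O", "timeout")):
                     for dev in set(dev_pattern.findall(line)):
                         errors[dev] += 1

         for dev, count in errors.most_common():
             print(f"{dev}: {count} suspicious lines")
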
  11. I searched the forum, but most hits were about things like adding or replacing drives in an existing array. I'm looking at expanding (merging?) the space available on an existing Samba user share (say \\FS1\share) with that of a second share located on a separate unRAID server (let's call it \\FS3\share). My Linux knowledge is rudimentary. Is that actually possible?
  12. "Are you writing to a ReiserFS drive that is quite full?" I write to a (Samba) share. All the drives are XFS. The array is ~87% full, but there is still 4.4TB of total space available. The drive with the least free space still has 170GB free, and most of the other drives have > 300GB free.
  13. "Try disabling network offload, see the Tips and Tweaks page. The plugin makes the change easy to perform. Your description sounds normal, the image looks about normal to me, but perhaps I may be missing your point? Writes are super fast while there's cache room to fill. I'm not sure if your description is saying it's good, or it's bad, or it's back to normal after the first post? (Sorry for my confusion, I may be missing what you are trying to point out.)" RobJ, I'm afraid I did a very poor job of explaining. I'm definitely experiencing consistently slower-than-normal copies (down to a few KBps, or even zero speed for stretches of time) when large files are involved. It did not seem to matter whether:
      - the VM is running or not (except for a small speed bump if the VM was on and is turned off during the copy)
      - the num_stripes fix is set to 8192 or left at the default
      - network offload is on or off
      I actually went back to 6.19 last night and, to my surprise, the problem persisted, so I'm now focusing on ruling out a problem at my end. I'll report back if I find anything.
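
     A minimal sketch (mine), assuming the NIC is eth0 and that ethtool is available, to list the offload settings this kind of advice is talking about:

         import subprocess

         IFACE = "eth0"  # assumed interface name; check yours with "ip link"

         # "ethtool -k" lists offload features (TSO, GRO, LRO, scatter-gather, ...) and their on/off state.
         result = subprocess.run(["ethtool", "-k", IFACE], capture_output=True, text=True, check=True)
         for line in result.stdout.splitlines():
             if any(key in line for key in ("segmentation-offload", "receive-offload", "scatter-gather")):
                 print(line.strip())
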
  14. "Can you install the stats plugin and run it on another screen to check how much memory is being used? Keep it on real time. I have had this issue on my system, where my VMs are, by the looks of it, leaking memory and using a lot more than they should. When I then do a transfer, the remaining memory is used up and transfer times increase, and doing anything with the array from my VMs during this time will crash Samba and can often cause the GUI to lock up. I have mentioned it a few times, but for now, to get around this, I had to decrease my VM memory. The issue is coming back, though: the longer I leave the system on, the more memory my VMs use." I've installed the stats plugin and run a test copy: a ~16GB file from my Windows 10 PC to the main unRAID server. The copy starts fast as usual, and I can see almost all RAM getting used up as "cache" (see pic); BTW, is that normal? When the RAM is used up, the speed slows down gradually. At around 17:22 I turn off the one VM I have running (it's now set to use only 2GB of RAM), and I can see the copy speed up again (bump at 17:23) until the cache is full again. After 17:36 the speed recovers to around 20 MBps and is stable until the end of the copy. http://i.imgur.com/5m66g5K.png
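
     The "almost all RAM used as cache" behaviour is normal: the Linux page cache absorbs the incoming file and then writes it back to the array, and once the dirty data hits the writeback limits the copy is throttled to array speed. A minimal sketch (mine), using the standard /proc/meminfo counters, to watch that happen while a transfer runs:

         import time

         def meminfo():
             # /proc/meminfo reports most values in kB, e.g. "Dirty:  123456 kB"
             values = {}
             with open("/proc/meminfo") as f:
                 for line in f:
                     key, rest = line.split(":", 1)
                     values[key] = int(rest.strip().split()[0])
             return values

         # Print page-cache and writeback totals every 5 seconds during the copy.
         for _ in range(12):
             m = meminfo()
             print(f"Cached: {m['Cached'] // 1024} MB  "
                   f"Dirty: {m['Dirty'] // 1024} MB  "
                   f"Writeback: {m['Writeback'] // 1024} MB")
             time.sleep(5)
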
  15. On beta 21, I'm also experiencing a problem with copying large files to the server. I do have a VM running, but the file transfers are usually from a secondary unRAID server to my main unRAID server, so nothing is copied to/from the VM. The copies start fine (70-80 Mbps), but within a few seconds they slow down to a crawl and eventually stop altogether. At the same time, the unRAID web GUI becomes unresponsive. The last few times this happened, I was able to stop the array but not to restart the server (I had to manually hit the reset button). Changing the num_stripes setting didn't seem to make any difference: the first large-file copy hung the server again. However, this time I was able to stop the array and also restart the server from the GUI, so maybe that's an improvement? I've never been able to see anything in the log suggesting a problem. After restarting, the server seems fine for a few days. fs1-syslog-20160529-1047.zip