Everything posted by JohnO

  1. This seems troubling. I wonder if the new kernel doesn't play well as a guest OS under VMware ESXi -- or maybe needs a kernel flag set. I've got b15 (working fine) as a VMware ESXi 5.5 guest. I probably won't get a chance to try booting to the unRAID RC until next weekend. John
  2. When switching from unRAID 5.0.5 to unRAID 6-Beta15, I noticed that loading /bzroot takes about 4 minutes, whereas the v5 boot was probably under a minute. I'd guess the file is much larger now, and it needs to be uncompressed as part of the startup process. Thankfully, I don't restart the unRAID VM that often. This is using PLOP.
  3. Wow. No. I get it now! Thanks to you and gundamguy for taking the time to re-pour the words back into one ear, because clearly, they had gone in one ear, and drained out the other! I see now it'll be a bit longer process than I had thought, but that's OK. I really appreciate it. Thanks, guys. John
  4. I'm glad to hear it's working! Now be sure to test to confirm you are getting the performance you would like to see. John
  5. What I believe is happening (it's been a few years since this was my daily work focus) is that when one system is set for jumbo frames and the other is not, the network switch in between is fragmenting the packets for you, as it knows that one system can't handle the jumbo frames. What type of network equipment is between the two servers? Can you confirm the jumbo frame size on that switch/switches? If you assume the switch is set to 9000 plus header overhead, then I'd set both servers to a jumbo frame size of 9000 and test. If that doesn't work, set each server to a jumbo frame size of 8968 and test (see the MTU sketch below). Either way, you should have a lot fewer packet disassembly and reassembly activities than you would with a standard Ethernet frame size of 1500 bytes plus overhead. John
  6. I think this is finally starting to sink in... So you are suggesting that I should just stop the array, replace both 1 TB drives with 2 TB drives, and have the unRAID parity system "magically" rebuild them onto the new 2 TB drives. These would then be RFS drives at that point. If I take this approach, I'm not sure of the best way to get to XFS. I only have 4 slots on this controller card, and all are full (three data drives, one parity drive), thus the discussion of consolidating data from three data drives down to two as my first step. Sorry for the basic questions, but as these are fairly lengthy processes, I figure I'd better get it right to try to minimize downtime. Thanks, John
  7. You need to make sure the frame size on your hosts is smaller than the frame size on the switches and routers. So, if the switches and routers are set to a jumbo frame size of 9014, you need to make the host frame size small enough to allow the jumbo frame header to be attached. At least for testing, you can decrease the frame size a bit on the hosts to ensure the jumbo packet is making it through the network infrastructure (see the ping test sketch below).
     https://en.wikipedia.org/wiki/Jumbo_frame#Super_jumbo_frames
     http://www.mylesgray.com/hardware/test-jumbo-frames-working/
     John
  8. Just to clarify -- maybe I should not use the word "replace" but instead say I'll be adding new drives to the array and removing old drives from the array (with one removed/re-added drive being the same physical mechanism).
  9. Hmm... Maybe I'm missing something. I was not planning on rebuilding a drive from parity. My thought was to empty out one drive by moving its contents to another existing drive, remove the empty drive and replace it with a larger drive, pre-clear the new larger drive, format it with XFS, and then move data to it from another existing drive. Once that is done, I would continue a similar process until all the data has been migrated to drives formatted with XFS. In particular, here is the layout:
     Current state:
       Parity: 2 TB (keep drive)
       Disk1: 1 TB RFS (60% full) (replace drive)
       Disk2: 1 TB RFS (40% full) (replace drive)
       Disk3: 2 TB RFS (20% full) (keep drive)
     Future state:
       Parity: 2 TB
       Disk1: 2 TB XFS
       Disk2: 2 TB XFS
       Disk3: 2 TB XFS
     My thought process is as follows (rough rsync commands are sketched below):
       1. Move all data from Disk1 to Disk3; remove the current Disk1. Install the replacement for Disk1.
       2. Move all data from Disk2 to the new Disk1; remove Disk2. Install the replacement for Disk2.
       3. Move all data from Disk3 to the new Disk2; re-format Disk3.
       4. Move some data from Disks 1 and 2 to Disk3 to even out the disks somewhat.
     Does that make sense? Thanks, John
  10. Greetings, I've been reading and re-reading this thread over the last couple of weeks. I have a few questions:
      1) Before I can add a new, larger drive, I need to move data around so that I can remove a drive. I figure it's about 600 GB of data. Would it be best to shut down the array to move that data? I assume in that case I'm safe to just telnet to the unRAID server and use the following rsync command to move the data into the existing hierarchy, without having to create a temporary subdirectory that isn't part of the user share, correct?
         rsync -av --progress --remove-source-files /mnt/diskX/ /mnt/diskY/
      At that point, I should be able to start following the guide, I think.
      2) Where does the pre-clear of the new drive get done? Is it before the steps in bjp999's guide? It is mentioned, but it is above the steps, so I wasn't sure.
      3) Also - in the steps above, as the array is running, are you at risk of duplicate files, and confusing things between steps 10 and 11? (See the duplicate-check sketch below.)
      Thanks, John
  11. I just upgraded my NAS from unRAID 5.0.5 to 6.0-beta15 following the upgrade guide. The process went very smoothly. I ran a parity check overnight, and re-enabled and tested services this morning. All seems to be working well. The new webGUI is really nice! FYI - my unRAID environment is itself a VM on VMware ESXi 5.5 with a passed-through disk controller. I'm not running any dockers or VMs from within unRAID. John
  12. I want to pass along my thanks for your guide. I upgraded my one-year-old unRAID 5.0.5 NAS (which is a VM on a VMware ESXi 5.5 server with a passed-through disk controller). The process went very smoothly. I did not have to modify the PLOP settings at all -- I just followed your guide to re-format the USB boot drive and update it with software and configs as you indicated. One question/comment: in following the guide to copy over the files from the version 5 config directory, I note that it copies my older cache-dirs file. Should that file be excluded from the copy, or deleted afterwards (see the sketch below)? It appears it is not active, but it might cause confusion. I run a very bare-bones system, with no plugins. The VM was configured with 1 GB of RAM, but since I had been reading in other places to go with 2 GB of RAM, I did that. Of course, in a VM environment, the RAM allocation is somewhat smoke and mirrors, as you can over-allocate. John
  13. It's unclear if you are interested in running Unraid as a guest under a different VM host OS, or in using the VM hosting features of Unraid v6 as a host. If you -are- interested in running Unraid as a guest, there is a whole section here to discuss it. http://lime-technology.com/forum/index.php?board=55.0 I've been running v5 for over a year as a guest VM on ESXi 5.x with no issues. John
  14. Cool. The 3rd edition is the most recent previous edition, so once you read through that, go visit your local Barnes and Noble and thumb through the newer version to see what changed -- or grab the free table of contents from Amazon.
  15. Older versions are available used for substantial savings. I see books in the $5.00 range on eBay. It'll still give you 80% of the value for the architecture - some of the specifics have changed, but once you have the foundations, the rest you can search for on the Internet. DNS, syslog, DHCP, and IP addressing have been around a long, long time, and at the lower levels, really haven't changed all that much over time. http://www.ebay.com/sch/i.html?_from=R40&_trksid=p2051337.m570.l1313.TR0.TRC0.H0.Xunix+system+administration.TRS0&_nkw=unix+system+administration&_sacat=267
  16. This is probably a lot deeper than you want to go, but to understand what is in the syslog, you need to understand the components that send messages to syslog -- that pretty much means you need to understand Unix/Linux at a deep, under the hood level. This book is (one of) the true bibles. It's been around (and revised) for ages. Reading the whole book really gives you a good understanding of the various technologies and how they all work together. http://www.admin.com John
  17. Do you have another chassis where you can test this card to rule out a bad card, or a conflict of some sort? Do you have (or can you set up) a different, bootable OS from a DVD or USB memory stick so you can see if the UnRaid O/S is the issue?
  18. That was fast! As I mentioned above, mine is in a box running VMware ESXi -- the drivers for that card are included in ESXi. Whatever shows up to the Unraid guest OS is probably a VMware virtual interface (which, I think, emulates the Intel Pro 1000). Hopefully someone else with that board who is using it natively to boot Unraid can chime in. John
  19. I'm not sure. I'm using mine as two separate ports, and my base OS on this box is VMware ESXi, so even if my Unraid Guest OS saw the two interfaces, it would probably be seeing the VMware network interface layer.
  20. I added this 2-port card to my home server. Seems well supported. I run VMWare ESXi on this server (where Unraid is a guest OS). http://www.ebay.com/itm/Intel-Pro-1000-PT-Gigabit-Dual-Port-PCIe-NIC-Adapter-EXPI9402PTG2P20-Full-He-/291428273420?pt=LH_DefaultDomain_0&hash=item43da7a8d0c
  21. Hmm... Along the same lines, do you have another network interface you could try, even if only temporarily?
  22. Do you have this system set up for dual-boot by any chance? I had a somewhat similar problem where, if I was running Windows and then rebooted to Linux, the network interface would work for about 5 minutes, then die. I had power-cycled the system from the "soft" power switch on the front of the machine, with no change in behavior. I unplugged the system from AC power and let it sit for 5 minutes. After that I booted straight into Linux, and the system has been fine for a year, including power cycling. I just can't boot to Windows and back to Linux without disconnecting from AC power.
  23. Yes. Sounds very logical. Thanks, John
  24. Yes please. As one who runs unRAID as a guest on VMware ESXi, I don't foresee adding another VM layer on my unRAID guest OS. John
  25. 1 GB of RAM assigned to my 5.0.5 VM guest running unRAID. Swap hasn't been touched, and the system is not breathing hard (I've got two main directories, with 3-8 subdirectories each -- a very small setup).
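
A minimal sketch for the jumbo frame sizing in post 5, assuming a Linux host where the interface is named eth0 (yours will likely differ) and that the iproute2 tools are available:

    # show the current MTU on the interface
    ip link show eth0

    # try a 9000-byte MTU first; fall back to 8968 if the switch can't pass it
    ip link set dev eth0 mtu 9000
    # ip link set dev eth0 mtu 8968

    # note: this change does not persist across reboots; make it permanent in your
    # distribution's network configuration once testing looks good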
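
A rough sketch of the end-to-end test from post 7, using ping with the don't-fragment option; 192.168.1.10 is a placeholder for the far server, and the sizes assume a 9000-byte MTU:

    # 9000-byte MTU minus 20 bytes of IP header and 8 bytes of ICMP header = 8972-byte payload
    ping -M do -s 8972 -c 4 192.168.1.10

    # if that reports "message too long" or gets no replies, jumbo frames are not
    # making it end to end; shrink the payload (or the host MTU) until it succeeds
    ping -M do -s 1472 -c 4 192.168.1.10   # standard 1500-byte frame check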
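
A minimal sketch of one migration step from post 9, assuming the disks are mounted at /mnt/disk1 and /mnt/disk3 as in a stock unRAID setup (adjust the disk numbers for each step):

    # step 1: move everything from Disk1 onto Disk3, deleting source files as they transfer
    rsync -av --progress --remove-source-files /mnt/disk1/ /mnt/disk3/

    # rsync leaves the now-empty directory tree behind on the source;
    # clean it up only after verifying the data arrived intact
    find /mnt/disk1/ -mindepth 1 -type d -empty -delete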
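
A rough duplicate-file check for the question in post 10, run before any move; it uses the same /mnt/diskX and /mnt/diskY placeholders as the rsync command in that post:

    # list relative paths that already exist on both the source and destination disks
    cd /mnt/diskX
    find . -type f | while IFS= read -r f; do
        [ -e "/mnt/diskY/$f" ] && echo "on both disks: $f"
    done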
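
A sketch for the leftover cache-dirs file from post 12, assuming the v5 config was saved to /tmp/v5-config, the flash drive is mounted at /boot, and the file matches cache_dirs* (all three are guesses -- check the actual names on your flash before running anything):

    # copy the old v5 config onto the new flash drive, skipping the old cache-dirs script
    rsync -av --exclude='cache_dirs*' /tmp/v5-config/ /boot/config/

    # or, if the copy has already been done, remove the stale file afterwards
    # rm /boot/config/cache_dirs*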