
About vw-kombi


  1. I have two movie folders spread over one movies share. One drive is almost full, with 37GB left. Can I fill it, or do I need to leave a certain amount of disk space free for any reason?
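The usual reason to leave headroom is the share's minimum-free-space setting: once writing a file would take a disk below that threshold, unRAID directs the write to another disk in the share. A rough sketch of that rule, with made-up disk names and sizes (the real logic also honours the allocation method and split levels):

```python
# Rough sketch of unRAID-style minimum-free-space allocation.
# Disk names and sizes are made up for illustration.
GB = 1024 ** 3

def pick_disk(disks, file_size, min_free):
    """Return the first disk that can take file_size while keeping at
    least min_free bytes free afterwards, or None if none qualifies."""
    for name, free in disks.items():
        if free - file_size >= min_free:
            return name
    return None

# disk1 mirrors the post: ~37GB free. With a 20GB minimum-free setting,
# a 25GB file spills over to disk2 instead of filling disk1.
disks = {"disk1": 37 * GB, "disk2": 500 * GB}
print(pick_disk(disks, 25 * GB, 20 * GB))
```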
  2. I just completed my first one of these new multi-step parity checks; I was doing them weekly before. I have never had any errors in the parity checks in the past (50+ of them). The first one with the plugin states 'completed with 1127 sync errors corrected', yet the parity check history for this run shows 0 errors. Any ideas?
  3. Same - Ryzen 7 2700 (not the X). New mobo is an ASRock B450 Pro4. New case: Fractal Design R5. New DDR4 RAM: 4 x 8GB 2600s.
  4. Just reporting back that this all went fine. Here is my process in case it helps anyone:
     1. Screenshotted the current disk layout for array, cache and unassigned disks; took backups.
     2. Set the array to NOT autostart; set dockers and VMs to not autostart.
     3. In my case, disabled the GPU passthrough modprobe config, as I was going from Intel to Ryzen.
     4. Shut down the array and the machine.
     5. Installed the USB key and disks in the new server, which has a monitor, keyboard and mouse (I'm usually headless) - reason below.
     6. Took time to label all the disks, as my eyes are not so good at reading those serial numbers.
     7. Booted to the array controller and checked all disks were attached.
     8. Booted to the BIOS and checked all disks on SATA were showing.
     9. Booted unRAID, checked all disks were showing, and changed the temp and fan plugins to the new mobo version.
     10. Checked the network was up and running.
     11. Started the array and did a few spot checks - all good, no parity build needed.
     12. Started each VM one by one and let it update for the new hardware.
     13. Started the dockers one by one to check all was OK.
     14. Reset the VMs and dockers to autostart where needed, plus the array to autostart. Job done.
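Step 6 (labelling the disks) can be made less error-prone by printing serials from the OS and matching them to the physical labels. A sketch using `lsblk` from util-linux, run on the old server before the move:

```shell
# List whole disks (-d skips partitions) with model, serial and size,
# so each physical drive label can be matched to its device name.
lsblk -d -o NAME,MODEL,SERIAL,SIZE
```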
  5. I found this topic after more searching: I will review and play with that method of allocating PCI devices instead.
  6. In reality, I don't have a problem allocating ethernet ports to the VM, but I have too many allocated and need to pull two back. I added a 4-port ethernet card and used the edit below so that the 4 ports would be handed to the VMs for pfSense. This was all great until I bought my new smart switch and wanted to do some bonding for increased throughput, so I bought a new dual-port ethernet card and plugged it in - but its device ID (8086:10c9) is the same as the original 4-port card's, so now I have 6 ports reserved for VMs and am no better off. Is there anything I can do to 'change' these numbers so the host can keep the new two-port NIC?

     append vfio-pci.ids=8086:10c9 pcie_acs_override=multifunction initrd=/bzroot

     IOMMU group 17:
     [1022:43c7] 03:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port (rev 01)
     [8086:10c9] 04:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
     [8086:10c9] 04:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
     IOMMU group 18:
     [1022:43c7] 03:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port (rev 01)
     [12d8:2304] 06:00.0 PCI bridge: Pericom Semiconductor PI7C9X2G304 EL/SL PCIe2 3-Port/4-Lane Packet Switch (rev 05)
     [12d8:2304] 07:01.0 PCI bridge: Pericom Semiconductor PI7C9X2G304 EL/SL PCIe2 3-Port/4-Lane Packet Switch (rev 05)
     [12d8:2304] 07:02.0 PCI bridge: Pericom Semiconductor PI7C9X2G304 EL/SL PCIe2 3-Port/4-Lane Packet Switch (rev 05)
     [8086:10c9] 08:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
     [8086:10c9] 08:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
     [8086:10c9] 0a:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
     [8086:10c9] 0a:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
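One possible answer to the question above, sketched under assumptions: `vfio-pci.ids` matches every device with that vendor:device ID, so two identical 82576 cards cannot be split that way. The kernel's generic alternative is to bind by PCI address using the `driver_override` sysfs attribute from a boot-time script. This assumes (from the IOMMU listing) that the original four-port card is the one behind the Pericom switch at 08:00.x and 0a:00.x, and that the new dual-port card at 04:00.x should stay with the host:

```shell
# Sketch only, not a tested unRAID recipe. First remove
# vfio-pci.ids=8086:10c9 from the syslinux append line, then bind just
# the four-port card's functions to vfio-pci by PCI address.
for dev in 0000:08:00.0 0000:08:00.1 0000:0a:00.0 0000:0a:00.1; do
    # If the host NIC driver already claimed this port, release it.
    [ -e /sys/bus/pci/devices/$dev/driver ] && \
        echo "$dev" > /sys/bus/pci/devices/$dev/driver/unbind
    # Force vfio-pci for this address only, then reprobe.
    echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override
    echo "$dev" > /sys/bus/pci/drivers_probe
done
```

This has to run before the VM starts (unRAID's go file is one place to put it), and the addresses must be adjusted if the cards enumerate differently.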
  7. Cheers for the info - done so much googling. Sort of resigned to the cheapest option of buying a couple of cheap dual-port NICs for pfSense. But I also figured I may just buy one (WAN/OPT1) and then get a smart switch for an OPT1 trunk, so it can have VLANs for LAN/IOT/GUEST with a Ubiquiti LR AP on there. Been wanting that setup and needed a push, I guess.
  8. After much investigation, I settled on the Ryzen 7 2600 CPU with an ASRock B450 Pro4 and 32GB RAM for my new server build, to replace some old 2014 hardware.

     Now my first hurdle. This is my first non-Intel CPU - I have had loads of Intel builds over the years, and they all have onboard graphics. I am not a gamer and have never bought a graphics card in my life, so the build hit its first hurdle of 5 beeps: no graphics card (even though the mobo has all the connectors for one). So I learnt something new about most Ryzen CPUs and these mobos - no integrated GPU. That really needs to be advertised better. I went out and bought the cheapest video card I could find to complete the build and test it.

     As I use a load of VMs, I read with worry about all the graphics card passthrough issues people were having - even having to buy two graphics cards! I settled on an AMD R5 230 for $30. Added it in and the server was up and running: build all fine, unRAID installed, a few old test drives added. I even had a VM ported over with no issues once it updated its drivers, and I could RDP to it fine with no graphics card problems - I don't think I even passed it through. Not sure what all the fuss was about that I read in the forums.

     So, my issue: I can't remove this graphics card, or I get the 5 beeps. My unRAID servers run headless in the garage, but this one will not boot without a graphics card. This mobo has 1 x PCIe x16 (where the graphics card is) and an extra PCIe x4 (where my disk controller is now connected), but I have a 4-port NIC for pfSense that now can't fit anywhere. In summary, am I stuffed?

     Option 1 - buy 2 x 2-port NICs on PCIe x1 (there are 4 of these slots and I need one for the additional 2 drives), hope they are supported, and sell my 4-port server NIC.
     Option 2 - buy a more expensive mobo that has 3 PCIe x4 or better slots (costs about AUD$250, plus a loss of AUD$150 on the other).
     Option 3 - are there any other options?

     Thanks in advance. V
  9. Hmmmm - OK, thanks for the info. I'm not very technical on dockers beyond installing from unRAID Community Apps / Docker Hub. I will delete this one and keep looking.
  10. Thanks @FoxxMD. I guess I need to know how to use it then? I was hoping I would go to a config page, add the areas I want indexed and the types of files etc., and then have a search page of some type.
  11. When I go to the web GUI for this, all I get is the below - should I be doing something else?

      {
        "name" : "HQdTNvT",
        "cluster_name" : "docker-cluster",
        "cluster_uuid" : "0jYRnGHBTimntwyla4MEIg",
        "version" : {
          "number" : "6.6.2",
          "build_flavor" : "default",
          "build_type" : "tar",
          "build_hash" : "3bd3e59",
          "build_date" : "2019-03-06T15:16:26.864148Z",
          "build_snapshot" : false,
          "lucene_version" : "7.6.0",
          "minimum_wire_compatibility_version" : "5.6.0",
          "minimum_index_compatibility_version" : "5.0.0"
        },
        "tagline" : "You Know, for Search"
      }
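That JSON is Elasticsearch's normal banner from its root endpoint - there is no built-in search page, so "something else" means talking to the REST API (or putting a front end such as Kibana in front of it). A minimal sketch of a search request body, with a hypothetical index name "files" and field "content"; it would be POSTed to http://&lt;host&gt;:9200/files/_search with Content-Type: application/json:

```python
import json

# Hypothetical Elasticsearch query body: a full-text "match" query on a
# "content" field, returning at most 10 hits. Index and field names are
# made up; substitute whatever the indexing docker actually creates.
query = {
    "query": {"match": {"content": "holiday photos"}},
    "size": 10,
}
body = json.dumps(query)
print(body)
```

The matches come back in the response's hits.hits array.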
  12. I am planning a major upgrade - everything except the disks - moving from Intel to Ryzen, on the latest version of the unRAID software. I will use an old data drive and a temporary unRAID USB key/license for the initial setup and burn-in of the new hardware. Once it has burnt in and all the hardware is working with unRAID, I will be ready for the 'move' to this new hardware from the old existing system. (Note: I have many backups in case of issues.) But when it comes to moving my unRAID config from the current system to the new one, it can't be as simple as connecting all the drives and moving the USB key to the new hardware, then booting up - or is it that simple? Is there anything Ryzen-specific that needs setting up, or does 6.6.7 fully support Ryzen?
  13. Cheers @itimpl. Powered down and re-arranged some cabling ready, cut my hands to shreds on the CPU cooler, and then it wouldn't start - I had knocked the power from the mobo! Back in action now, but it has made me think it may be time for primary and secondary unRAID systems.
  14. I read somewhere recently that the 12-device limit on my license is imposed when starting the array. I currently have 12 devices: 1 x parity, 6 x array drives, 1 x cache, 1 x SSD unassigned disk inside, 1 x hot-swap 4TB backup drive, and 2 x external USB backup drives. I assumed I had no room to grow the array without a license upgrade, as that all adds up to 12 drives. I have a new drive to add to the array, and was either going to swap the two USB drives for a single larger one, or buy the Pro license upgrade so a single disk could be added. Up to now I have been pulling the 4TB 'unassigned devices' drive out and slotting in an 8TB backup drive for my offsite backups, then repeating for the other offsite backups, as I assumed the license wouldn't allow me to just plug an extra hot-swap disk in. It seems I may have been wrong, but I'm after confirmation.
      Q1: Can I safely just slot disks in and out of the server (for unassigned-devices use only, i.e. backups) as long as I don't stop and hence restart the array while they are connected?
      Q2: If the answer to Q1 is yes, then I guess I can disconnect the two USB drives, restart the array, add two new additional drives to the array, then reconnect the two USB backup drives (and make sure they are disconnected if I ever need to restart the array)?
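The tally in this post can be checked with a few lines. Per the post's own reading, the licence's attached-device limit is evaluated when the array starts, so the count below is what matters at that moment (the labels are simply the post's list):

```python
# Device tally from the post; the 12-device licence limit is, per the
# post, imposed at array start rather than continuously.
devices = {
    "parity": 1,
    "array_drives": 6,
    "cache": 1,
    "unassigned_ssd": 1,
    "hot_swap_backup": 1,
    "usb_backups": 2,
}
total = sum(devices.values())
print(total)           # number of attached devices
limit = 12             # licence's attached-device limit, per the post
print(total <= limit)  # exactly at the limit, not over it
```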