
Fuoman

Members
  • Content Count: 11
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About Fuoman
  • Rank: Member
  1. Everything is working. Thank you for all the assistance.
  2. Worked like a charm. This was my mistake, thinking the NIC wouldn't be aware of both links as a dual-port card. Thank you, and sorry for my description of the issue.
  3. I have done this just now. I had to reboot my server and am waiting to try the connection. I've tried to confirm that all settings are set correctly for SMB and NFS sharing as well. I will now try 192.168.3.2 and 192.168.3.3 for unraid1 and unraid2, then 192.168.2.3 and 192.168.2.4 for unraid2 and Windows.
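For reference, the addressing scheme above amounts to two independent point-to-point subnets, one per direct cable. A minimal sketch of how that could be applied from the Unraid command line, assuming hypothetical interface names eth1/eth2 for the 10Gb ports (in practice this would be set persistently in Unraid's network settings GUI):

```shell
# On unraid1 (single 10Gb port, assumed eth1) -- link to unraid2:
ip addr add 192.168.3.2/24 dev eth1
ip link set eth1 up

# On unraid2 (dual-port X540, assumed eth1/eth2):
ip addr add 192.168.3.3/24 dev eth1   # port 1: link to unraid1
ip addr add 192.168.2.3/24 dev eth2   # port 2: link to the Windows PC
ip link set eth1 up
ip link set eth2 up

# The Windows PC's matching port is set statically to 192.168.2.4.
# Each cable gets its own /24; no gateway is needed because each
# peer is directly attached to the other end of the cable.
```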
  4. Hello Frank1940, this is not a diagnostics issue. It is not a problem with the servers or an existing configuration; it is a missing configuration that I need to figure out. My network is set up like this, starting from the first server and router: a Cisco 8-port router. Port 4 goes to the onboard NIC on the first server, unraid1, at 192.168.1.81; this is the default port for the web GUI, Docker access, and VM access using the internal routing in Unraid, and all my apps run on 172.x.x.x IPs. My second server, unraid2, connects to port 6 on the router at 192.168.1.112; this is the new server. The Windows PC is 192.168.1.45 on port 2 of the router. This is all largely irrelevant, as all three systems see each other without a problem, but 1Gb is slow.

     unraid1 (array)                unraid2 (main workstation)        Windows workstation
     default 192.168.1.81 (1Gb)     default 192.168.1.112 (1Gb)       192.168.1.45 (1Gb)
     docker  192.168.1.82           vm 192.168.1.134                  10Gb port 192.168.2.5
     docker  192.168.1.83           vm 192.168.1.132
     vm      192.168.1.84           10Gb port 1: 192.168.2.3
     10Gb 192.168.2.2 ---------->   10Gb port 2: 192.168.2.4 ------>  192.168.2.5

     Everything on 192.168.1.x is part of my home network, and all IPs are assigned statically. Everything on 192.168.2.x is not part of any router or switch; these are straight-through links. unraid1 has a single Mellanox ConnectX-3 10Gb NIC with a 25m cable to port 1 of a dual-port Intel X540 10Gb NIC in unraid2; port 2 goes to the Windows PC's Mellanox ConnectX-2 10Gb port. Windows recognizes unraid2 over 10Gb, and I can map shares to unraid2 only (by design). I cannot get unraid2 to mount a share from unraid1 over 10Gb, only over 1Gb. I would like to use Samba or NFS to mount a media share from unraid1 on unraid2, for my Dockers to access over 10Gb. What script, commands, routing, or network wizardry can I use to make NFS or Samba see the shares using these 10Gb IP addresses?
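Once unraid1's 10Gb address is reachable from unraid2, the mount itself is ordinary NFS or SMB pointed at the 10Gb IP rather than the hostname; using the 192.168.2.x address instead of the 192.168.1.x one is what forces traffic over the direct link. A hedged sketch, run on unraid2 (the share name `media`, the mount point, and the credentials are placeholders, not taken from the posts):

```shell
# NFS: mount unraid1's media share via its 10Gb address (192.168.2.2)
mkdir -p /mnt/remotes/unraid1_media
mount -t nfs 192.168.2.2:/mnt/user/media /mnt/remotes/unraid1_media

# Or SMB/CIFS instead (username/password are placeholders):
# mount -t cifs //192.168.2.2/media /mnt/remotes/unraid1_media \
#   -o username=USER,password=PASS,vers=3.0
```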
  5. The servers have two separate networks. The 1Gb network is connected to a managed switch and router; the 10Gb links are not. These are direct links, think of them as crossover cables. Over the 1Gb link the servers see and can ping each other without issue. The 10Gb runs directly from port 1 of the 10Gb NIC in the middle server to the 10Gb NIC on the first server; port 2 on the middle server connects directly to the 10Gb port on the Windows machine. Windows to the middle server works without issue; however, middle server to first server does not. This is where I need a solution. The 1Gb network is 192.168.1.0 with netmask 255.255.255.0. The 10Gb addresses are 192.168.2.2, 192.168.2.3, 192.168.2.4, and 192.168.2.5, with no gateway and no subnet set, as direct links.
  6. Has anyone made any headway on this topic? I am in this exact scenario. I recently sold one of my servers, as I was running my array on an external Supermicro JBOD chassis with my Cisco server doing the lifting. I removed the Cisco, moved a low-power system into the Supermicro chassis, and now have a direct 10Gb connection to my main PC (desktop) in my office, which now runs a second Unraid OS to do the lifting. Three reasons why I'm doing this: noise, power, and newer hardware. I want to keep my 26 disks in the Supermicro chassis and utilize the 10Gb link I have between the two systems. I have managed to get this working without issue over SMB and NFS on a 1Gb connection, host to host via routing; however, I cannot get a direct link working over 10Gb on an optical cable. I believe my limitation is in not having a gateway to tell the two hosts where to look on the separate subnet. The funny part is that this all works like a dream in Windows 10: I set a second 10Gb NIC port to static 192.168.2.3 and the Unraid host's 10Gb NIC to 192.168.2.2, and they work without issue in the Windows file system. I have a 10Gb managed switch, but it's loud and uses a huge amount of power versus an active SFP+ cable, which is the preferred option. If anyone has any thoughts or ideas, please let me know. I am OK with Debian but have no clue where to start with the Slackware command line. Cheers
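For anyone hitting the same wall: a directly cabled pair of hosts does not need a gateway at all, as long as both ends have addresses with a netmask in the same subnet; the kernel installs a connected route for the link automatically, so the missing piece is usually the subnet mask rather than a gateway. A rough checklist for the Unraid/Slackware side, with eth1 as an assumed interface name:

```shell
ip link show                          # find the 10Gb interface and check it is UP
ethtool eth1 | grep -i speed          # confirm the link negotiated 10000Mb/s
ip addr add 192.168.2.2/24 dev eth1   # peer gets 192.168.2.3/24; same /24, no gateway
ip route show dev eth1                # a connected route for 192.168.2.0/24 should appear
ping -c 3 192.168.2.3                 # verify the peer answers over the direct cable
```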
  7. Yes, I have a 200Mbps WAN connection handled by my router VM on another server, which is super quiet; this goes to my 1Gbps switch. This side of the network is good; it handles all my non-essential stuff, as mentioned before. Then I had six devices connected to the 10Gbps switch, all of which have 1Gbps failover anyway; they really have no need for 10Gbps. The NICs were free from work, as were the twinax cables, so that's why I did that. The 10Gbps switch is basically non-essential and pulls 93 watts from the wall at idle, so I want to remove it. And the Cisco is just overkill and power hungry compared to modern hardware. Yes, it's good quality and made for what it does, but it pulls roughly 300 watts from the wall. So the setup will be: 1Gb switch <> unraid1 <10Gb> unraid2 <10Gb> PC. This will all be on a second subnet separate from the 1Gb stuff. However, both servers will have 1Gb connections for web GUI and Docker app functionality outside of the VMs. The VMs will have their own 1Gb connections; I only need two VMs, hence the 2-port 1Gb NIC. All 1Gb will be handled by the primary router on primary DHCP. All the 10Gb is for is faster file transfers and, if possible, handling the data stream for media sitting on one server with the Docker on the other. This is why I thought it would be theoretically possible. I need to reduce noise and power consumption, and I don't want to purchase more networking gear, as it's not required YET, not until they make affordable 10Gbps wireless APs.
  8. I currently run 10Gb to all my servers and directly to my two main computers, through a Quanta l6bm 48-port 10Gb switch. The problem is that I want to eliminate this switch, as I really only need 1Gb to most of my gear except for three interconnects. Currently I have my Intel X540 configured to connect to my main PC for file transfers, and I see around 2000Mbps between my NVMe drives. The reasoning for two Unraid servers is that one will not be touched, acting as a basically redundant NAS with a small website. The second server is for my main workload, and its 2-port NIC is for IOMMU passthrough, so my VM sits on a separate VLAN in my network, as does Home Assistant, rather than bridging through Unraid's primary NIC. System 2 will have three 1Gb NIC ports and two 10Gb NIC ports, all fibre. I need three systems talking at 10Gb; the rest (cameras, smart TVs, APs, etc.) will run at 1Gb to my Cisco 48-port switch.
  9. I have a peculiar idea I'm trying to accomplish. I'm trying to downsize my servers and would like to set up two Unraid machines together; for simplicity, they will be unraid1 and unraid2. unraid1 will be low power: a Supermicro X10SLM motherboard with an Intel Celeron G3220 dual core and 8GB of ECC RAM, inside a 24-bay Supermicro chassis. This system will have a 10Gb NIC feeding unraid2 via fibre, as well as my LSI card for the backplane. unraid2 will have an E3-1230v3 with a dual-port 10Gb NIC, six 10k RPM drives as a RAID 0 cache, and a PCIe NVMe Optane drive. It will house my Plex Docker, a Windows VM, and a few other small Dockers, and will eventually get a Quadro card. My main use is just Plex and a small website I host for my business. This unraid2 system will eventually become unraid1 in the Supermicro chassis, and I will replace unraid2 with a newer AMD system that will serve as the main Unraid box with the Plex Docker etc., as well as a Windows VM for video editing and gaming. In theory this should work; however, I need to know what limitations Unraid has as far as sharing media from one array to a Docker container on another Unraid system. 10Gb fibre between them should be good enough for that link. Likewise, what are the limitations of having a Windows VM on the unraid2 system write directly to the array on the unraid1 system? Or any other thoughts on how this should be done. The idea is to replace a loud Cisco UCS C240 M3 server that doesn't support 3.5-inch drives, as well as a loud 10Gb switch. I'm trying to eliminate my 42U rack altogether in favour of a smaller rack for my switches, core router, and array box. This is an I NEED HELP moment. Thanks in advance for any comments and advice. Cheers
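On the question of a Docker container on one box using an array on the other: one common pattern is to mount the remote array share on the host, then bind-mount it into the container, so the container never needs to know the storage is remote. A hedged sketch, assuming hypothetical names for the share, mount point, and addresses (the Plex image name is the official one, but verify against your setup):

```shell
# On unraid2: mount unraid1's array share over the 10Gb link
mkdir -p /mnt/remotes/unraid1_media
mount -t nfs 192.168.2.2:/mnt/user/media /mnt/remotes/unraid1_media

# Then pass it to the container as an ordinary bind mount:
docker run -d --name plex \
  -v /mnt/remotes/unraid1_media:/media:ro \
  plexinc/pms-docker
```

The practical limitation is bandwidth and latency of the link rather than anything Unraid-specific; a 10Gb point-to-point link comfortably covers Plex streaming and most VM write workloads.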