Marshalleq

Everything posted by Marshalleq

1. Thanks, I have a BTRFS RAID-1 cache already; this is additional.
  2. Oh fantastic! That's exactly what I was looking for, thanks for the tip!
3. One thing to think about: if you're doing any docker automation on NVMe, most setups will move those files from the NVMe to the cache and then to the array, so you're wearing out both flash devices at once. If you just went with NVMe for the whole thing, it would be one write, because a move within the same filesystem on a drive is effectively a rename rather than a copy (there's a minimal sketch of the difference at the end of this list). I'm currently considering a dual-NVMe RAID-1 cache setup to get around this. It's so expensive to get decent endurance ratings on these things, so this is another way to help with that (for me at least).
4. Yes, I'm aware - I posted about this elsewhere but got no reply, probably because I gave too much detail. I don't want to use my existing cache drive (BTRFS RAID-1); the SSD endurance won't handle the VMs and dockers I have, which are constant I/O. I had one in Unassigned Devices (an enterprise-grade 1TB Samsung), but it died. So I thought: how can I get redundant SSDs without having to sell or throw away my current cache drives, which are perfectly good? So I figure an additional standard RAID-1 mirror, which I'll somehow have to mount in Unraid myself, since as far as I know Unraid doesn't provide this capability through the GUI (a rough sketch of how that could be done is at the end of this list).
5. Has anyone done this? I'm thinking about setting one up. I'm not sure yet how I'd make it persistent, but assuming I can, it would be great for VMs and dockers, I think.
6. Yes, I have backups - thank you CrashPlan - and only one SSD failed, a Samsung used for VMs and dockers. The gotcha I didn't think about is that CrashPlan runs on the drive that failed, where the dockers also live, so I've had to set CrashPlan up again, which requires a very large file sync and then a block sync, meaning I can't get the data back for probably 3 or 4 days. Design change number one: run the CrashPlan docker off the mirrored cache drive at the very least.
For my cache drive I use 2x 500GB Samsung Evos. Not the flashiest, but all good for file cache. VMs and dockers are a different story, given they do an awful lot of continuous writes, which consumer SSDs don't like. So what I'm thinking is buying two new SSDs with decent IOPS and a decent endurance rating, breaking out the shell and making my own SSD mirror, because as I understand it Unraid won't do it unless it's for a cache, which I already have and don't think it would be cost effective to change. Besides, I'd then have to replace my two 500GB SSDs and I'd have no use for them, given Unraid won't take them except as unassigned disks. My data disks are all good in the Unraid array, so I'm leaving that out of this conversation.
The Intel S3700 seems to be the king of endurance out there, but I'd have to buy second-hand from eBay due to the price, and even with that robustness you don't know where they've been. Could be good though. Out of interest, the drive that died (which really shouldn't have) is an enterprise Samsung SM863 960GB. As of this morning it thinks it's a 1GB drive, SMART doesn't detect it, and who knows what else - it seems like the firmware got in a knot, and I've tried before and haven't been able to get firmware for the drive, so I don't think I can try refreshing it. It should be under warranty, but Samsung are being a right PITA, placing the burden of service on the customer as per here. Go Samsung! My next drive will not be a Samsung, reading all the other complaints around about their service.
So how would you set it up if you used VMs and dockers and had two existing 500GB drives in a mirrored Unraid cache that you didn't want to use for VMs and dockers? Thanks.
7. Yeah, I'd be more concerned about having a decent firewall than about opening a port. I use OPNsense, which has some nice intrusion detection and prevention, but there are others like pfSense and IPFire, to name three, and there are heaps more. I'd be interested to know what a good hardware-based one is for under $200, as I've been looking for a while and haven't found anything. One of the big challenges these days is that the companies can't scale their offerings down for consumers with lower-end CPUs etc., as consumers have gigabit connections all over the place. That's one of my problems: the off-the-shelf offerings slow my connection to a crawl.
8. I got a bit lost about what you're trying to do in the middle, but if all you want to do is keep the HDDs and run Unraid on a different board while setting something else up, you can just swap your Unraid USB to the new hardware and then back again. There's no limit on the types of hardware, just on how many times you can write a licence to a key. I've swapped mine around a few times and it didn't matter. Secondly, the HDDs don't actually need to be in the same order unless you've got a very strange (or possibly old) setup. They're assigned by IDs that are specific to the drives. I've swapped cables, drive locations and all sorts around many times and it always comes back. This is normal for a modern Linux distro. Perhaps if you had an older Unraid version installed before this feature was implemented and followed a path of upgrades I could see it being an issue, but that would be all. (A longer-term user of Unraid may be able to clarify this further.) If they fit, I'd keep all your drives in the new system. If they don't, you could configure the system how you want before moving it (unplugging the drives afterwards so you don't get confused about what you disabled) and then bring it up in the new one. That's how I'd approach it anyway. I'd leave parity on, because when you move things around, sometimes things fail and you might need it. Hope that helps.
9. What kind of link would prove to you that it was OK? I'd take a guess at only a court case? I could do a Google search and try to find it again; the problem is nobody seems to want to test the theory - legal theory or not. What changes it from theory? Only a court case, I expect. And nobody wants to go there; unless Google adopts it or something, I'd say fat chance.
10. LOL, no, I came to Unraid for a reason. And I am so sick of hearing people argue over at FreeNAS about ECC RAM. The Unraid RAID driver is built by Lime Tech, and to that end, if an option can be dreamed up, I'd agree with going that route. Ultimately, yes, self-healing is what I'm salivating over, along with the whole shebang of ZFS - like many others. However, most Linux systems include support for all the filesystems, and there are pieces of each that can be useful in different scenarios. So on one hand, yes, it would be great to build some self-healing into Unraid, but there is all sorts of other stuff ZFS has too. Who knows, maybe there's something in the code that could be brought in. Also, I don't think we can say it's not RAID when you think about what RAID stands for. RAID has typically been striped or mirrored data, but nothing says it has to be. Primarily, Lime Tech just got rid of the striping and figured out that this meant you didn't have to have the same size disks. Netgear also figured this out (very similarly) with their X-RAID implementation on ReadyNAS, and I know there's at least one other out there.
11. I think the licensing concerns do exist, but as concerns, not as facts. I'm pretty sure that's been proven illegitimate now; of course, the fact that the concern still exists in people's minds is enough to be a problem.
12. This would be great, particularly having some kind of fan-based GUI with naming; there must be a few existing OS versions out there to use. +1.
13. Yep, coming from some other NAS-type tools, this end of the spectrum is a bit lacklustre in Unraid. Though I hold hope that it will flourish and be as well implemented as the rest of Unraid. +1 from me!
  14. "Assigning interfaces is possible from the GUI (see Interface Rules)." - yet it's still very limited and clearly didn't work for me. "It shouldn't. With a correct procedure a single reboot is required". - Maybe I should have changed to 'why it requires -any- reboots' for a network change. It shouldn't be so hard. "When tossing interfaces, it requires to move the physical cable" - All cards had a cable (yes 3 physical cables), so I don't think this is relevant (assuming I am correctly interpreting what 'tossing interfaces' means). Basically I'm just trying to say, in this day and age, and with Unraid having so much reliance around docker and VM's networking we can probably do better (and probably should do better), via GUI. Unraid is awesome, it would be more awesome with some semi-recent GUI network config tool. Right now it's very very basic. Attached the QNAP one below, just cause it's what I know about, being that QNAP is on linux, basic and based around open source community. Good to think about, in terms of where we could be in the future, not where we must be, just where we could aim. Amongst the NAS's QNAP does have an edge when it comes to customer oriented features so I tend to look toward them and ask how and why. Why should we suppress thinking forward?
  15. So that worked, I now have eth0, eth1 and eth2. Thanks for your help. Other thoughts, which should probably be in a new thread include, "Questioning the wisdom of bonding a 1G and 10G interface together", "Why the unraid GUI has such a limited capability for network configuration compared to e.g. a QNAP NAS", "Why it requires so many reboots", "Why it loses connection and requires local console access". But I'll leave those for a time when I can bring together the examples. Perhaps some of it was caused by what you've just helped me solve. Thanks again for your help!
16. OK, just tried it. Stopped the array. Deleted the config. Renamed eth1 to eth0 in the network rules. Rebooted. Now I only have eth0, which is good, although it said the eth0 cable was unplugged for some reason. Configured the static IP again. Rebooted again. Now it's working. So this 'seems' to be good. Now doing an additional reboot to see if it sticks; if it does, I'll add the additional card back in to see what happens.
  17. OK makes sense. Will try when I’m back - seems like network rules should be in the GUI.
18. So I've done the card swap. Actually rebooted a few times without the card in and without a config loaded, to make sure. So here's the thing: with no external card, only a single onboard NIC in the computer, and no network configuration file loaded, it starts with eth0 and eth1 bonded (even though there isn't both an eth0 and an eth1). Going through the syslog, to my reading, eth0 is being renamed to eth1 and eth1 is considered to be in a down state. Go figure. I've uploaded the latest config in case you can see something I didn't. obi-wan-diagnostics-20190617-1943.zip
19. Thanks for your help on this one - BTW the out-of-tree module was not added by me, so I assume it's a not-fully-tested kernel module, something that comes with the kernel, or something added by Unraid. I'll do the card swap shortly.
  20. Actually, I don't recall, but I was just wondering if I should do that. I did delete the config, but not with the card out. It's a good thought. I still wish the GUI worked properly though, but this might go toward showing that it doesn't. Also it probably doesn't account for why only a single port is showing up, though it could also reset from being taken out or something. I did of course swap the card this morning, but chances are the Intel 10G and 1G use the same driver.
  21. Thanks, yes BIOS already checked and I did bind it on the old card (but not this one) - and it was still exhibiting odd behaviour to be honest. My other thread was an attempt to list all the things that didn't work and address / make an enhancement request. If we accept that this is not working ideally, I would ultimately like to help fix it properly so that the GUI can be used like it should be used, rather than hack it if you know what I mean. I would have thought others might have had these issues too given the very common motherboard and NIC.
22. Somewhere I wrote a long list of all the problems I've experienced with Unraid's network management GUI, but I can't find it right now, so I'm starting a smaller thread here with one main specific issue. I have an onboard NIC which refuses to be eth0 by default. It's extremely frustrating when you want to pass through your other NIC for latency reasons, or otherwise want to remotely manage your machine, and for some reason clicking apply on any change requires you to go directly to the Unraid machine and log in with GUI mode because it loses connection. Of course, I've been told (in the other thread) that this is my fault and a configuration issue. Anything's possible, I suppose - maybe you can help me figure it out.
So, to reset, I delete /boot/config/network.cfg and reboot. All cards are then assigned to a bond by default, which I don't want, so I turn bonding off and click apply. At this point I have:
Eth0 - assigned by the system to one of the dual-port add-on NIC ports
Eth1 - assigned by the system to the onboard NIC (from an Asus X399-A)
Eth2 - nowhere to be seen, don't know why
In the interface rules there is only one MAC address listed, which is the onboard NIC, and it is assigned to eth1. I have today tried a different NIC (a dual-port 10G rather than 1G, both by Intel - same symptoms). My earlier, rather frustrated post started here after spending hours trying various suggestions to fix the issue. The poster on that thread, I recall, told me it was simple and that I can choose it via the MAC address in the interface rules; however, there is only one MAC address listed. My expectation is that the GUI would see all ports and let me choose which NIC is the default, which address is on which physical port, and split out the dual ports on the add-on card. None of that seems to happen for me. (A crude way to at least list every port and its MAC address from the console is sketched at the end of this list.)
Something that confused me for a while: the interface description field isn't a physical interface description but a virtual one, and therefore changes depending on configuration. I.e., if I name the interfaces internal/external so I don't have to look up MAC addresses all the time, the name can move to a different card depending on various factors, including whether I include eth0 or eth1 in it and whether I choose a bond.
I have attached diagnostics, but bear in mind I can only do this while the interfaces are bonded; another nice little quirk is that while the card is in, neither interface seems to operate stand-alone. And I'm again experiencing symptoms similar to my other post:
1 - A reboot of the server is required to apply network changes (possibly only when changing between bonding and non-bonding modes), or possibly just because I can't get any communication over either interface with the card in and separate IP addresses on the cards.
2 - You can't make changes except on the local machine, as you lose connection and can't restart the box remotely (at least for me, and again possibly because I can't communicate with the external card in and individual addressing). Highly frustrating.
And so I must run a bond between a 1G and a 10G card if the card is in, which is certainly not desirable as I'd like to use the 10G card for other things. Any (helpful) thoughts? In 25+ years of IT work, I don't recall ever having such issues with NIC assignment.
The only other slightly helpful thing I can think of: I had identified that on the dual 1G card, Unraid was assigning a driver that's explicitly stated by Intel to be for single-port cards only - there could be something there, I suppose. And another thing I just discovered: communication IS happening over the onboard NIC when bonded, and ironically (at least with the 10G card in) unplugging the onboard NIC stops all communication, even though the external NIC is part of the bond. obi-wan-diagnostics-20190617-0203.zip
23. At a guess, most routers that are locked down like that also have UPnP enabled, which will automatically open any required ports. If Unraid supports that (or you can force it to), then you'd be in business. It's very convenient, though in my view a bit of a security risk; nevertheless, you could try that route (no pun intended).
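
For post 3 above, a minimal sketch of why keeping the whole workflow on one filesystem avoids the extra flash writes. This is illustrative only: the paths are made-up placeholders, and the behaviour shown is standard Python/Linux semantics, not anything Unraid-specific.

```python
import os
import shutil

def move(src: str, dst: str) -> None:
    """Move src to dst, showing where the extra writes come from.

    On the same filesystem, os.rename() only updates metadata: the file's
    data blocks are never rewritten, so the flash sees no extra writes.
    Across filesystems (e.g. NVMe pool -> cache -> array), a move has to
    fall back to copy + delete, so every hop rewrites the full file.
    """
    try:
        os.rename(src, dst)      # same filesystem: metadata-only rename
    except OSError:
        shutil.copy2(src, dst)   # different filesystem: full data copy (new writes)
        os.remove(src)

# Hypothetical paths, for illustration only:
# move("/mnt/nvme/downloads/file.bin", "/mnt/nvme/media/file.bin")   # no data rewrite
# move("/mnt/nvme/downloads/file.bin", "/mnt/cache/media/file.bin")  # one full rewrite
```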
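For posts 4 and 6, a rough sketch of how an extra BTRFS RAID-1 SSD mirror could be created and mounted outside the GUI. The device names and mount point are placeholders I've invented; double-check them with lsblk before running anything, since mkfs is destructive. These are the stock btrfs/mount commands wrapped in Python, not an Unraid feature.

```python
import subprocess

# Hypothetical device names and mount point - substitute your own.
DEVICES = ["/dev/sdx", "/dev/sdy"]
MOUNTPOINT = "/mnt/ssdmirror"

def make_mirror() -> None:
    """One-time: format the pair as BTRFS RAID-1 (data and metadata mirrored)."""
    subprocess.run(
        ["mkfs.btrfs", "-f", "-d", "raid1", "-m", "raid1", *DEVICES],
        check=True,
    )

def mount_mirror() -> None:
    """At boot: scan for members, then mount either device; BTRFS assembles the mirror."""
    subprocess.run(["btrfs", "device", "scan"], check=True)
    subprocess.run(["mkdir", "-p", MOUNTPOINT], check=True)
    subprocess.run(["mount", DEVICES[0], MOUNTPOINT], check=True)

if __name__ == "__main__":
    mount_mirror()
```

To make it persistent, as I understand it you'd call something like this (or the equivalent shell commands) from /boot/config/go, which Unraid runs at boot; the Unassigned Devices plugin would be the friendlier route if it supports your layout.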
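For post 22, when the interface rules only show one MAC address, a quick way to see every port the kernel actually detected is to read /sys/class/net directly from the console. A minimal sketch using standard Linux sysfs, nothing Unraid-specific; note it will also list virtual interfaces like bond0, br0 and lo, and members of a bond may report the bond's MAC.

```python
import os

SYS_NET = "/sys/class/net"

def list_nics() -> None:
    """Print every network interface the kernel sees with its current MAC,
    so you can tell which physical port ended up as eth0/eth1/eth2."""
    for iface in sorted(os.listdir(SYS_NET)):
        addr_file = os.path.join(SYS_NET, iface, "address")
        try:
            with open(addr_file) as f:
                mac = f.read().strip()
        except OSError:
            mac = "unknown"
        print(f"{iface}\t{mac}")

if __name__ == "__main__":
    list_nics()
```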