
Dr. Ew

Members
  • Content Count: 35
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About Dr. Ew
  • Rank: Advanced Member


  1. I can post diagnostics shortly. Is there anything I can do here? I tried a different port; no difference. When I bring the array up, the system says there is no mountable file system on the data array or on the first cache drive, and both prompt me to format. I think I need to create a new config and then reassign the drives, but I'm not sure.
  2. unRAID keeps randomly losing the configuration of both the array and the cache disks. Usually it's just frustrating, but this time it is telling me that all existing data on my cache pool will be overwritten upon starting the array. It's a 12-disk RAID10; all drives are fine and online. However, of the 12 disks, unRAID only shows that it remembers one of them. This happens after every few restarts. Typically I just re-assign the drive it doesn't have allocated and start the array. I've seen this once before, where it tells me it will format all drives, but I can't remember how I solved it. I have a small amount of data on the cache pool that I can't lose. What's my process for getting the array online without reformatting? (See the check sketched after this list.)
  3. I've posted on this topic before, but the topic got a bit sidetracked (by me). Are there any known issues with using NVMe drives in a RAID5 cache pool? In one of my unRAID servers right now there are six 2 TB NVMe drives in RAID5. Writes to the cache pool are very slow at 350 MB/s, and reads are 750 MB/s. As an unassigned drive, a single NVMe reads and writes at 900+ MB/s. I then tried six 2.5" 1 TB SSDs in RAID5: 1.4 GB/s read and write, same server, same settings. There must be some sort of bug or incompatibility to be getting only 350 MB/s on the pool when a disk speed test shows each drive capable of 2000+ MB/s, and a pool of the same number of 2.5" drives fully saturates the line. I had a similar issue before when testing 40GbE, but then the r/w speed was capable of saturating 10GbE, just not 40GbE. I rebalanced the pool too; no difference. Any ideas here? (A benchmarking sketch is after this list.)
  4. I was going through the Chelsio manual to tune a T-580. The next step was to unload the drivers, which I did. The step after was to reload them, but that is not working for me. The NIC now doesn't show up. I've tried everything I can think of. Can someone tell me how to get it back? (See the driver-reload sketch after this list.)
  5. Send me a PM, I can help you out. Has anyone had luck getting 10GbE to work in High Sierra? I spun up a few macOS VMs and can't get networking active. I've only tried in High Sierra; I'm trying Mojave next. I may have to plug in gigabit Ethernet to get it to work in High Sierra. Curious if anyone else has the same issue?
  6. You can plug the card into a Windows machine and adjust the config there, if it isn't persistent. That's how I handled mine.
  7. This has been on my to-do list for a while, and I feel like knocking it out now. I want to be able to send clients files from cloud storage. Instead of using gDrive or Dropbox, I want to use ownCloud, Nextcloud, or an alternative to create shareable links with a custom domain, either a domain I own or a DDNS domain. I want to send a client files to download with, for example: "Dr. Ooh has sent you a file to download: www.nARctic.cloud/xxxxxx.rar". That's the bare minimum. Ideally, the landing page the client reaches is custom as well. I'm thinking that's something I could set up with a web server on one of my servers, yeah? How do I go about it? (A minimal sketch is after this list.)
  8. Oh, and also, along the same lines: I jumped down a rabbit hole searching for VNC or virtual desktop software that can display multiple monitors from one VM, and couldn't find much yet. Does anyone know of software to accomplish this? (It looks like another feature of GRID/MxGPU.)
  9. GPU pass-through is no doubt a great tool for unRAID, but I am very curious about virtualized GPUs. I am just starting to research it right now, but it seems NVIDIA GRID and AMD MxGPU are the available options. I could have sworn I heard of other, open-source ways to do it. The use case would be remote VMs. Is it possible to use my installed GPUs to provide graphics acceleration for virtual machines? To my layman's mind it seems it should be possible to do with one VM, without advanced software, or by rerouting the output of the GPU somehow. MxGPU and GRID do this, but they are geared toward multi-user setups running lots of VMs. For me, purchasing Tesla or Quadro GPUs and the software to do this would be an option, if unRAID supports it. But it would be even better if I could use my current GPUs to accelerate VM graphics. In my various unRAID servers I have RX 580s, 1080 Tis, 1060s, and an RTX 2080. I'm wondering if I can utilize any of those for this purpose.
  10. Cool. Thank you. That’s definitely why I couldn’t get it to work. I’ll give that a go.
  11. Thanks. I identified the enclosure and pulled it; all clear since then.
  12. I also couldn't find anywhere in the documentation whether unRAID supports a direct connection and a connection through a switch at the same time. I haven't been able to get that set up either. For instance, a dual-port NIC connecting to two separate subnets: Port 1 -> 10GbE switch on subnet 1; Port 2 -> direct attach to a client or server on subnet 2. I've tried with no success. Is that possible too? (An addressing sketch is after this list.)
  13. Now that you mention it, a single unified view of all data is a desired goal. I didn't know this was possible, but the general idea is coming up on my list, and I'd like to learn more. The reason behind the initial post is two-fold. First, I'm out of ports on both my 40GbE and 10GbE switches, and I need to add a few more servers to my network; the Chelsio NIC provides for spider mode, allowing four 10GbE connections to a single server. Secondly, I've found that I can achieve faster transfer speeds when they are initiated from VMs. For instance, I can achieve a 2.5 gigabyte per second transfer from a VM on Server 1 to any of my clients, whereas initiating a transfer to the same client, from the same share, from outside the VM results in less than 500 megabytes per second. I have no idea why (an iperf3 sketch for narrowing that down is after this list). But a connection like that for select servers would allow me to create a separate network for machines that require that throughput but not that level of bandwidth.
  14. Is it possible to create a bridge between multiple physical servers? If so, how do I achieve it? This is what I am looking to do, with either option A or B. I want all clients to connect to a switch, then to a bonded NIC on the first server: all clients connect to Server 1 through the switch, and all clients reach Servers 2 and 3 through Server 1. Servers 2 and 3 would ideally share the internet connection with Server 1. The only difference between options A and B is that in A, Servers 2 and 3 connect directly to Server 1, while in B, Server 3 connects to Server 2 and Server 2 connects to Server 1. I assume there is no benefit to B, and probably more complexity. Alternatively, it would actually be better for me to remove the switch from the equation: could I connect a single client to Server 1, instead of the switch, and then have either option A or option B? I tried setting this up without much luck, but I likely wasn't approaching it in the right way. (A bridging sketch is after this list.)
  15. I did figure out that the only such device I have is a StarTech dual 2.5" to 3.5" RAID box; I believe two of my cache drives are in it. Is there a way to ping the HDD activity light on the server so it shows me which bay the device is in? (A sketch is after this list.)
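
For the cache pool trouble in posts 1 and 2: unRAID cache pools are normally btrfs, so before accepting any format prompt it is worth checking from the console whether the old pool metadata is still on the member disks. A minimal, read-only sketch (device names are whatever your system reports; nothing here writes to the disks):

    # List filesystem signatures on every partition; look for TYPE="btrfs"
    # on the pool members (and TYPE="xfs" on array data disks) before
    # letting anything format them.
    blkid

    # Ask the kernel to re-scan for btrfs member devices, then show every
    # btrfs filesystem it can see along with its member devices. A healthy
    # 12-device pool should list all twelve even if the GUI has forgotten
    # the assignments.
    btrfs device scan
    btrfs filesystem show

If all the members show up there, the data is still present at the filesystem level and the remaining question is only how the pool gets re-adopted, so don't confirm any dialog that says it will format or overwrite.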
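
On the slow NVMe pool in post 3, it can help to separate the pool/RAID layer from the raw drives by benchmarking both the same way. A rough sketch using fio; paths and sizes are placeholders, and the cache pool is assumed to be mounted at /mnt/cache:

    # Sequential write against the mounted cache pool (creates a 4 GiB test file).
    fio --name=pool-write --directory=/mnt/cache --rw=write --bs=1M \
        --size=4G --ioengine=libaio --iodepth=16 --direct=1

    # The same test against a single unassigned NVMe mounted elsewhere,
    # e.g. under /mnt/disks/nvme_test, for a like-for-like comparison.
    fio --name=single-write --directory=/mnt/disks/nvme_test --rw=write --bs=1M \
        --size=4G --ioengine=libaio --iodepth=16 --direct=1

If the pool still lands around 350 MB/s while the single drive is far faster, the bottleneck is in the pool/RAID5 layer rather than in the drives or the network.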
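
For the missing Chelsio NIC in post 4: on Linux the T5-series cards are driven by the cxgb4 kernel module, so unloading the driver makes the interfaces disappear until it is loaded again. A minimal sketch, assuming nothing is blacklisting the module:

    # Confirm the card is still visible on the PCIe bus.
    lspci | grep -i chelsio

    # Reload the driver that was unloaded, then check that the module is
    # loaded and the ports have reappeared.
    modprobe cxgb4
    lsmod | grep cxgb4
    ip link show

    # If the ports still don't show up, the kernel log usually says why.
    dmesg | tail -n 50

A reboot also brings the driver back, since unloading a module isn't persistent across boots.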
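
For the custom-domain download links in post 7: Nextcloud or ownCloud running as a Docker container will generate public share links once your domain (or DDNS name) points at the server, so that covers the full-featured route. As a bare-minimum sketch of the "plain web server over a share" idea, using the stock nginx Docker image; the share path, port, and domain are placeholders:

    # Serve files dropped into /mnt/user/outbox as plain download links.
    # Pick a host port that doesn't collide with the unRAID web UI (which
    # already uses 80), then forward your router's port 80 to it. After
    # that, http://www.nARctic.cloud/xxxxxx.rar maps to
    # /mnt/user/outbox/xxxxxx.rar.
    docker run -d --name outbox \
      -p 8088:80 \
      -v /mnt/user/outbox:/usr/share/nginx/html:ro \
      nginx

    # A custom landing page is just an index.html dropped into the same share.
    echo '<h1>Dr. Ooh has sent you a file</h1>' > /mnt/user/outbox/index.html

Anything more polished (expiring links, passwords, upload pages) is exactly what the Nextcloud sharing feature already does, so that is probably the better long-term path.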
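
On post 12 (one dual-port NIC serving a switch and a direct-attach link): at the Linux level this is just two interfaces with addresses in different subnets, and a direct DAC or fibre run needs no switch as long as both ends sit in the same subnet. A minimal sketch with placeholder interface names and addresses; in unRAID you would normally set the equivalent static IPs per interface in Network Settings rather than by hand:

    # Port 1: into the 10GbE switch on subnet 1.
    ip addr add 10.0.1.10/24 dev eth2
    ip link set eth2 up

    # Port 2: direct-attached to the other machine on subnet 2.
    ip addr add 10.0.2.10/24 dev eth3
    ip link set eth3 up

    # On the direct-attached peer, give its port an address in the same
    # subnet (e.g. 10.0.2.20/24) and the two ends should ping each other.

The main thing to avoid is putting both ports in the same subnet, or bonding/bridging them together when you actually want them kept separate.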
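
For the VM-versus-host transfer gap in post 13, one way to narrow it down is to measure the raw network path separately from the share/disk path with iperf3 (host names are placeholders):

    # On the client that receives the transfers:
    iperf3 -s

    # From the unRAID host, then again from inside the VM on that host:
    iperf3 -c client.lan -P 4

If both runs saturate the link, the 500 MB/s versus 2.5 GB/s difference is coming from the share/SMB path rather than from the NIC, bridge, or cabling.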
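
For the daisy-chained servers in post 14: Linux can bridge two physical ports so that traffic from the downstream machines is switched through Server 1, leaving everything on one subnet behind Server 1's uplink, which is also how they end up sharing its internet connection. A rough sketch with placeholder interface names; unRAID normally creates br0 for you, and this just shows what would be enslaved to it:

    # Create a bridge and enslave the uplink port and the port that goes
    # to Server 2 (or to a single client in place of the switch).
    ip link add name br0 type bridge
    ip link set eth0 master br0   # uplink / switch / single client
    ip link set eth1 master br0   # cable to Server 2
    ip link set eth0 up
    ip link set eth1 up
    ip link set br0 up

    # Server 1's own address lives on the bridge, not on the ports.
    ip addr add 192.168.1.10/24 dev br0

Servers 2 and 3 then just take addresses in the same subnet and use the normal gateway; no routing or NAT is needed as long as everything is bridged. Chaining 3 behind 2 behind 1 (option B) works the same way but adds hops and failure points, so the flat option A is simpler.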
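
For finding the bay in post 15: if the enclosure has per-drive activity LEDs, you can make one blink by forcing reads from just that device. A harmless, read-only sketch; the device name is a placeholder, so double-check it first:

    # Identify the device by model/serial/size first.
    lsblk -o NAME,SIZE,MODEL,SERIAL

    # Read from the suspected drive for ~30 seconds; its activity LED should
    # blink steadily while this runs. Nothing is written to the disk.
    timeout 30 dd if=/dev/sdX of=/dev/null bs=1M

Enclosures and backplanes that support SES can also light a locate LED with ledctl locate=/dev/sdX from the ledmon package, if that happens to be available.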