nry320i

  1. That worked, thanks! ./ makes sense now that I think about it, because ./ is how you run a locally stored executable file in Linux.
  2. How can I run the command tailscale cert to enable HTTPS, as described at https://tailscale.com/kb/1153/enabling-https/ ? I have tried running it inside the unRAID Tailscale Docker container after enabling MagicDNS and HTTPS in the settings of my Tailscale account, but I'm not able to run the command from inside Docker. I'm using the option to click on the running container in unRAID and open a console, and then I try to run:
/app # tailscale cert
sh: tailscale: not found
Do I need to SSH into the container and then run the command? If so, how do I SSH into the container? Any help is very much appreciated!
  3. If this is possibly a glitch with unRAID, can someone please move this topic to the correct place in the unRAID forums for me? I appreciate any insight into this issue. I really enjoy using unRAID, but I want to make sure this is not an issue with my overall computer hardware before I purchase a full license. I'm using an LSI controller in IT mode, but the rest of the hardware is standard ASUS/Intel, which I'm pretty sure should be fully supported by unRAID. The LSI card fully supports ZFS as well; I made sure of that before purchasing it for this build. Thank you for any help and insight anyone can provide!
  4. I started using unRAID a few months ago and had this same issue on a different USB drive. I figured the first thumbdrive had a corrupt storage controller. Today I booted unRAID from a fresh copy on a different 8GB thumbdrive, a totally different brand, and I know this thumbdrive works fine because I have used it to store files many times. I'm seeing an extremely high number of writes to this USB thumbdrive, as shown in the screenshot; WRITES on the USB boot device are showing up as 2^64. How is this even possible? Something is clearly very wrong here. Could this be two faulty USB drives, or possibly a bug in unRAID? Thanks for the help! Attaching the diagnostics export from this specific unRAID system I'm using for testing; the export was made while the 2^64 anomaly was showing. UPDATE: This extremely high number of WRITES went away after a full system reboot of unRAID. Could this have occurred because the USB was removed while unRAID was running? If so, this seems like a bug in unRAID, and maybe my post should be moved to another area of the unRAID forums. However, I saw this same glitch/anomaly occur with a different unRAID installation on the same system using a different USB thumbdrive. I assumed it was a corrupt memory controller inside that thumbdrive, but clearly this is unlikely to happen twice with two different devices. ryraid-diagnostics-20200715-1922.zip
  5. Okay, so it turns out it was just this POS cable: https://www.ebay.com/itm/2x-Mini-10Gbps-SAS-SFF-8086-26Pin-to-4-SATA-7Pin-HDD-Hard-Drive-Splitter-Cable/273565542248?ssPageName=STRK%3AMEBIDX%3AIT&_trksid=p2057872.m2749.l2649 I got a separate mini-SAS cable, this one: https://ebay.us/yIUYKo and now the 2X HGST 8TB SAS drives work perfectly! Hope this is helpful to others trying to use ASUS gaming hardware with SAS enterprise hardware. The Art of Server seller on eBay, who has many helpful YouTube videos, walked me through diagnosing these strange issues. IMPORTANT advice that may seem very simple but can cause a lot of headache: make sure you are using "forward breakout" cables and not "reverse breakout" cables.
  6. I spoke to The Art of Server seller I got the card from on eBay, and he suggested a fix. I will try it and report back if it resolves my issues.
  7. When you say LSI BIOS, are you referring to a special LSI BIOS that is separate from the ASUS BIOS for my motherboard? I do not see any option to boot into an LSI BIOS during boot. On a Supermicro mobo I have, there is an option to enter the onboard Intel RAID controller BIOS setup that is separate from the mobo BIOS, so I want to ask to be sure. Also, when you say a BIOS update might help, are you referring to the ASUS motherboard or the LSI (Dell H200) controller card? If it is the former, I recently applied the latest BIOS update from ASUS for my motherboard. Lastly, inside the ASUS BIOS where it has LSI settings, I'm not seeing any drives listed anywhere at all; it only shows the LSI firmware info, not a single hard drive. What should I try next? I'm sort of at a loss here, because I specifically bought this LSI (Dell H200) controller card pre-programmed in IT mode based on several unRAID forum posts suggesting it should just work out of the box. Thanks for all the help! I really hope I can get this working soon!
  8. Thanks for the fast response! I will go back and try it again with legacy boot enabled in the LSI settings inside my ASUS motherboard BIOS. Is there some other LSI boot setup screen I can get into to see whether all drives are being detected? When I had legacy boot enabled to begin with, no drives connected to the LSI card were present in unRAID at all. That is a good point about the PCIe slots. However, it is strange that the NVMe device connected to the M.2 slot built into the board is the one that is not present in unRAID; the one in a PCIe slot is there and shows up as cache in unRAID. I will see if unplugging both SAS cables from the LSI card brings back my onboard M.2 NVMe device. Is there anything in my unRAID diagnostic logs that stands out? I'm not really sure what to look for since I'm new to unRAID. I really love the ease of use of unRAID, and it is going to help me a lot once I get to the point of setting up my high-performance GPU-based Windows 10 VMs. However, if I can't get this working soon, I will have to go back to my Arch Linux host OS with a KVM Windows 10 VM and GPU passthrough. I really don't want to do that: it would work for months and then break from updates, and the amount of setup time involved to get it all working with GPU passthrough on top of Arch is the killer.
  9. Hello, I'm new to unRAID, but previously had a custom Linux setup with Linux software RAID, GPU passthrough, etc. I'm having trouble getting the Dell H200 6Gbps SAS HBA (LSI 9210-8i / 9211-8i, P20 IT mode firmware, ZFS-capable) to detect all my drives in unRAID. This is the Dell H200 card I got off eBay: https://ebay.us/jxCFbl I currently have 4X Western Digital (WDC) enterprise-grade 2TB SATA drives plugged into one of the two ports on the LSI controller card, and two HGST 8TB drives plugged into the other. I also have two SAS HGST 8TB drives, but I'm waiting for a mini-SAS cable to arrive on Saturday, so I can't use those yet. My plan is to move the data from my previous Linux RAID on the two 8TB SATA drives (about 5TB) over to the 4X 2TB drives, so I can then format the two 8TB HGST SATA drives to ZFS for unRAID. Once I have the SAS cable and my data is moved off the two 8TB Linux RAID drives, I will remove the 4X 2TB WDC drives and just have the 4X 8TB HGST drives. I'm trying to keep everything in my one Cooler Master COSMOS II tower case.

My main purpose for this setup is NAS duty plus Windows 10 VMs with GPU passthrough for 3D rendering and animation software that only runs in Windows. I also do software development, so Linux VMs and app containers will get a lot of use too. I also have two 1TB NVMe drives I will use for redundant cache.

The problem I'm having is that unRAID is only detecting one single 2TB WDC drive out of the 6X drives plugged into the LSI (Dell H200) controller card; none of the other drives connected to the LSI card show up in unRAID at all. One of my 1TB NVMe drives, plugged into a PCIe slot, shows up. However, the NVMe drive plugged into the onboard M.2 slot is not showing up now, even though it previously did. Any help would be much appreciated! I have included the diagnostics zip. My motherboard is an ASUS Rampage V Extreme.

I am able to go into the system BIOS and see that the LSI card is present. It shows the BIOS info, but the only setting that can be changed for the LSI card is a choice between "Legacy mode" and "non-Legacy mode", I think; I can go in and take a photo of that option if it is helpful. I changed it to the non-legacy option. I can't remember if it was actually labeled UEFI, but I set up the unRAID USB with the UEFI option available and I'm booting the USB in UEFI mode, so I figured it made sense to choose the non-legacy option for the LSI card. Before I did this, none of the drives connected to the LSI card showed up in unRAID; switching to the non-legacy mode made the one WDC 2TB drive appear. Note: I do have one Seagate 2TB drive plugged directly into one of the SATA ports on the motherboard, and that one shows up fine in unRAID. When I had all the drives plugged into the motherboard's onboard SATA ports, they did show up in unRAID. It is really strange that my M.2 NVMe has not been present in unRAID since installing the LSI card. Any ideas why this would happen? I really need to get this set up ASAP so I can access my data again from the unRAID share, as I have a project I need to be working on and the latest data is only on those two 8TB SATA drives. I have other copies of the data, but the most recent files are on there. Thank you in advance for the help! ry320i-diagnostics-20200409-1248.zip
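For the tailscale cert question in post 2: instead of opening a console inside the container, the command can be sent in from the unRAID host terminal with docker exec. This is only a sketch under assumptions: the container name tailscale and the hostname your-machine.your-tailnet.ts.net are placeholders (use the container name shown in unRAID's Docker tab and your machine's real MagicDNS name), and where the tailscale binary lives inside the image varies, so the second command can help locate it if it is not on PATH:

```sh
# run tailscale cert inside the running container from the unRAID terminal
docker exec tailscale tailscale cert your-machine.your-tailnet.ts.net

# if that reports "tailscale: not found", locate the binary inside the image first
docker exec tailscale sh -c 'find / -type f -name tailscale 2>/dev/null'
```

No SSH into the container should be needed; docker exec runs the command directly inside the container's namespace.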
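For the drive-detection problem in post 9, a few commands run from the unRAID web terminal would show what the kernel actually reports about the HBA. A sketch, assuming the usual setup: an H200 flashed to IT mode is normally handled by the mpt2sas driver, though the exact log strings vary by firmware and kernel version:

```sh
lspci | grep -i lsi        # confirm the HBA is enumerated on the PCIe bus
dmesg | grep -i mpt2sas    # driver init messages: firmware version, PHYs, attached devices
ls -l /dev/disk/by-id/     # every disk the kernel currently sees, by model and serial
```

If lspci shows the card but no drives appear in the driver messages, the problem is more likely on the cable/breakout side than on the PCIe side.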
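To illustrate the ./ point from post 1 with a runnable shell example (hello.sh is just a throwaway name for demonstration):

```shell
# create a small script, mark it executable, and run it with ./
cat > hello.sh <<'EOF'
#!/bin/sh
echo "hello from a local script"
EOF
chmod +x hello.sh   # without the execute bit, ./hello.sh fails with "Permission denied"
./hello.sh          # ./ runs the file from the current directory instead of searching $PATH
```

Plain hello.sh (without ./) only works if the current directory happens to be on $PATH, which it deliberately is not on most Linux systems.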