noja

Members
  • Content Count

    74
  • Joined

  • Last visited

Community Reputation

8 Neutral

About noja

  • Rank
    Advanced Member


  1. Just a note to anyone trying to implement Duo 2FA on this docker: I spent a long time failing to get it working because my browser kept saying https://api-xxxxxxx.duosecurity.com%20%20%20 was incorrect. It turns out that if you accidentally leave trailing spaces after the API URL in the guacamole.properties file, Guacamole will include those spaces and the API request will fail (hence the %20%20 in the browser console). Other than that, the whole setup was straightforward following the Guacamole docs and works just fine in the end. I even run it behind the lsio/letsencrypt reverse proxy. The only difference from the official documentation is that the Duo secret key does not have to be 20 characters as noted there; the supplied 40 characters works just fine.
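
     For anyone setting this up, the Duo section of guacamole.properties looks roughly like this (property names per the Guacamole Duo docs; the values here are placeholders):

         # guacamole.properties - Duo section (placeholder values)
         duo-api-hostname:    api-XXXXXXXX.duosecurity.com
         duo-integration-key: YOUR_INTEGRATION_KEY
         duo-secret-key:      YOUR_40_CHAR_SECRET_KEY
         duo-application-key: YOUR_LOCALLY_GENERATED_KEY

     And a quick way to catch the trailing-space trap before it bites you (adjust the path to wherever the docker maps the config):

         # flag any lines ending in one or more spaces
         grep -nE ' +$' guacamole.properties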
  2. noja

    SAS to SATA Adapter?

    Awesome! Thanks!
  3. noja

    SAS to SATA Adapter?

    Sorry, yes - I'm using a SATA controller card. So I understand that no matter what, the SAS drives will have to run through the LSI HBA. I'm just wondering about the necessary cables. Can I keep using the same SFF-8087 to SATA cable and just put an adapter between the cable and the drive? Or do I need to use SFF-8087 to SFF-8482 for the drives to work?
  4. So I have a couple of new SAS drives coming in for my Unraid build. I have an X8DTL-3F in a Rosewill RSV-L4500 with 12 drives. 4 drives are on a SATA expander card, and I have an LSI 9211-8i (P20, IT mode) which is currently connected to 8 drives via SATA. I know I need to run the new SAS drives through the HBA, but I have a couple of different options and I'm not sure which way to go:

     1 - I can get an SFF-8087 to (4) SFF-8482 cable like these, but then I'd have to connect two existing drives to the SATA expander card, which I'd like to move away from.

     2 - I can keep the SFF-8087 to (4) SATA connection and get two adapters like these.

     Is there any real difference between the two options? Is one better than the other? Realistically, I'd love to get another HBA, but you can see in the photo that I've dropped a fan onto the north bridge heatsink. A fun aspect of this mobo is that the bridge will otherwise overheat and shut the system down, so I don't think I can put another card over that heatsink and still keep a fan there. Thanks for any help!
  5. Hey @CHBMB, just wanted to say thanks for the template. Unraid has been my first docker experience so I've actually had a lot of fun learning basic things like how to take that xml and turn it into a working container. I've finally managed to sort it out and I have openldap and phpldapadmin talking to each other happily. Next step is integrating with the lsio letsencrypt container and eventually SSO. So thanks again!
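
     In case it helps anyone following the same path, the shape of what I ended up with is roughly this (a sketch using the osixia images with made-up values - not necessarily how the template itself is wired):

         # shared network so phpldapadmin can reach openldap by name
         docker network create ldapnet

         docker run -d --name openldap --network ldapnet \
           -e LDAP_ORGANISATION="Example Org" \
           -e LDAP_DOMAIN="example.com" \
           -e LDAP_ADMIN_PASSWORD="changeme" \
           -p 389:389 -p 636:636 \
           osixia/openldap

         docker run -d --name phpldapadmin --network ldapnet \
           -e PHPLDAPADMIN_LDAP_HOSTS=openldap \
           -p 8443:443 \
           osixia/phpldapadmin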
  6. Ah! You called it! Yeah, thanks - I initially left the vdisk bus as virtio and didn't realize it needed to be something else and/or match the install type I was using. Thanks for the help!
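
     For anyone else who hits this: the bus is set on the disk's <target> element in the VM's XML (device names here are placeholders). The two variants look like:

         <!-- SATA bus: Windows setup sees the vdisk without extra drivers -->
         <target dev='hdc' bus='sata'/>

         <!-- virtio bus: faster, but setup only sees the disk after loading
              the driver from the virtio-win ISO -->
         <target dev='vdb' bus='virtio'/>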
  7. I'm fairly new to VMs in general, much less Unraid and KVM. I'm running 6.6.7 and using the Server 2016 template for the VM, and I've added 2 separate vdisks of 60 and 100G to see what I can do, but the installer can't see them. I've tried virtio-win-0.1.160-1 and 0.1.164 but without success. Did I miss something easy? Thanks!
  8. Hot damn, that was fast and easy, thanks - I've never been presented with file system errors before. The check even took a total of 6 seconds and minor fixes were issued. Thanks!
  9. So I'm trying to remove an outdated share, and I've tried a number of different methods to remove the last remaining folder in the share, but all of them are being stymied by a "Structure needs cleaning" error. My log is also telling me:

     Apr 4 11:13:13 MANKY-DREADFUL kernel: XFS (md4): Metadata corruption detected at xfs_dinode_verify+0xa3/0x4ce [xfs], inode 0x53e90785 dinode
     Apr 4 11:13:13 MANKY-DREADFUL kernel: XFS (md4): Unmount and run xfs_repair
     Apr 4 11:13:13 MANKY-DREADFUL kernel: XFS (md4): First 128 bytes of corrupted metadata buffer:

     I've attached diagnostics. I know there are a number of hardware issues; I'm just waiting on Amazon to send me my damn CPU coolers so that I can switch boards, but I still have to wait a bit longer. manky-dreadful-diagnostics-20190404-1119.zip
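
     For reference, the command-line equivalent of the check that fixed it (a sketch - in Unraid you'd start the array in Maintenance mode first; md4 matches the device in my log, so substitute your own):

         # dry run first: report problems without changing anything
         xfs_repair -n /dev/md4
         # if the issues look minor, run the actual repair
         xfs_repair /dev/md4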
  10. @jonathanm thanks again, the transfer last night went perfectly with no issues.
  11. Actually, it goes all the way down. And it includes the appdata share and all the files in each appdata folder. It confuses the hell out of me. It also keeps recreating docker containers that I have previously deleted. Thank you!
  12. So I have a new server (DL360e Gen8) that I've been tinkering with for a little while now. There's nothing on the server yet that's important, but I've really managed to screw it up. So I'm wondering, how do I start from scratch with the same USB so I don't have to find a new one and go through the license retrieval process? Additionally, I've saved a copy of the diagnostics, so if anyone is ever interested in trying to find out why shares that I have deleted keep reappearing, you're welcome to take a look!
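
      In case anyone else lands here needing the same thing, the rough procedure I found (a sketch - the backup path is made up) is to save the licence key off the flash, rewrite the stick, then put the key back:

          # on the running server, the flash is mounted at /boot
          cp /boot/config/*.key /mnt/user/backups/   # save the licence key
          # then re-create the stick with the Unraid USB Creator (or the
          # manual zip method) and copy the .key file back into the config
          # folder on the flash before booting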
  13. As always, Spaceinvader One has a video for this. I didn't know about the "one at a time" option, but I think you could always go with his method too if one or the other gets complicated.
  14. Hi! I'm trying to learn the best way to network two servers together. Currently I have a whitebox with tons of storage capacity on an x5675 (Srv1). I also just acquired a DL360e Gen8 that has a bit more processor/RAM horsepower (Srv2). I have Unraid successfully running on both servers, cache disks etc. I'm curious, though: what are the best practices for utilizing the main storage on Srv1 while running most containers from Srv2, from both a software and hardware perspective?

      I currently have NFS shares exported from Srv1 which are mounted on Srv2 through Unassigned Devices, and the docker mapping is set to RW/Slave. Both servers connect through single NICs on Cat6 through a layer 3 Nortel/Avaya switch. The issue that prompted this question is that when running Airsonic on Srv1, a full library scan finishes in about 30 seconds; Airsonic on Srv2, with appdata on a cache SSD, takes about 5 minutes to scan the same number of files.

      EDIT - Airsonic has its "Fast access mode" for a reason; it greatly sped up the scan time. However, I'd still like to know whether I've set up the best method for connecting the two servers. Do I have my setup wrong? Am I missing something? Is there already a write-up out there on the best way to connect these two servers? Thanks!
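
      For concreteness, the Srv2 side currently looks roughly like this (paths are examples; the actual mount is created by Unassigned Devices):

          # NFS share from Srv1 mounted by Unassigned Devices under /mnt/disks,
          # passed to the container as a slave bind mount so remounts propagate
          docker run -d --name airsonic \
            -p 4040:4040 \
            -v /mnt/user/appdata/airsonic:/config \
            -v /mnt/disks/srv1_music:/music:rw,slave \
            linuxserver/airsonic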
  15. Sorry, I haven't looked at your diagnostics as I don't really know what I'm looking for. After @johnnie.black noted that the issue for me was related to a USB3 port, I ended up doing maintenance on some bad SATA cables, and I think I switched the connected keyboard & mouse to different USB ports. Since then, I haven't seen the same notification.