Dan M

Members
  • Posts: 10
  • Reputation: 1

  1. Putting the SSH key in "Users" in the GUI has worked for me for sshing into the Unraid server. But.... what if I need to go the other way around? I want to ssh from my Unraid server to another host. Very simply: unraid$ ssh user@my_machine 'ls -al' "my_machine" isn't a problem; it has the public key. But will Unraid lose the key files in /root/.ssh on reboot? Even better, I've taken to using a config file on my other machine. I can specify the hostname by IP address if so desired, set up aliases, specify the username, etc. It has really simplified my life. Will /root/.ssh/config survive a reboot? (I know I could try it, but I'm in the middle of building parity - it'll be a while before I can test it.)
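For reference, a config entry of the kind mentioned above might look like this (the host alias, IP address, user, and key path are all made-up examples); the sketch just writes one to a temp file to show the syntax:

```shell
# Example ~/.ssh/config entry; host alias, IP, user, and key path are invented.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
Host my_machine
    HostName 192.168.1.50
    User user
    IdentityFile ~/.ssh/id_rsa
EOF
# With this saved as /root/.ssh/config, "ssh my_machine 'ls -al'"
# picks up the IP, user, and key automatically.
cat "$CFG"
```

On persistence: on many Unraid versions /root lives in RAM and is rebuilt at boot, so a common workaround is to keep the files under /boot/config on the flash drive and copy them into /root/.ssh from the go file - though newer releases may persist /root/.ssh on their own, so it's worth verifying for your version.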
  2. I have two network drives on my Windows machine mapped to shares on my Unraid server. I can see them and access files on them just fine. If I go to a guest Ubuntu virtual machine (Oracle VirtualBox), I can see the shares and again access them. (I get permission errors trying to access some files but not others - but that's another story, I think.) One of the scripts I run has to ssh back to the host to run a Windows command. When it ssh's back to the host machine, it no longer has access to the shares. If I just ssh to the host and run net use, I can see 192.168.1.254 fine. I can map network drives to it OK, and I can re-establish the shares if I net use them again. What is going on here? How can I fix this?
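If anyone else runs into this: Windows drive-letter mappings belong to the logon session that created them, so drives mapped at the desktop generally aren't visible inside an incoming SSH session, which would explain the behavior above. The usual workaround is to re-map inside the same session the command runs in; a sketch (the drive letter, share name, and ssh target are placeholders):

```shell
# Map the share inside the SSH session itself, then run the Windows command.
# Z:, the share name, and user@my_machine are placeholders.
ssh user@my_machine 'net use Z: \\192.168.1.254\share && dir Z:\'
```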
  3. Shipped quick, too. I ordered one yesterday and received it today!
  4. Ran across this great deal while looking for a new Cache drive. It may be posted elsewhere, but I couldn't find it. https://www.amazon.com/dp/B07GJV5KF6 $109.99 - Intel 1.92TB 6Gb/s 2.5" SATA TLC Enterprise Server SSD. 7.1 TBW. 5 Year Warranty. I plan on replacing my consumer grade PNY 512GB SSD with this for my cache drive.
  5. Please bear with me and be gentle with me, I'm only a week into Unraid! I've set up an Unraid server with the following...
Drives:
- 10 array drives + 1 parity drive (11 drives total in the array; a 2nd parity drive to be installed later). All HDDs.
- 1 pool (cache) drive. SSD.
- 1 SMB share pointing to my primary Media Server (M-Drive).
Shares:
- appdata
- domain
- E-Drive
- isos
- system
All are configured with Cache as primary storage and Array as secondary. All except are marked with a caution: "Some or all files unprotected". Just a guess on my part, but since I have Cache as primary, there are some/all files on the cache drive and some on the array. Since the array is the only one with parity, it's the only one protected. All of the shares, according to this caution, have data on both the cache (which is unprotected) and the array (which is protected).
Dockers:
- Binhex Plexpass (hostpath2 = /mnt/user/E-Drive, key 1 = /config/transcode)
- Hotio qbittorrent (Host Path for /config = /mnt/user/appdata/qbittorrent = /config, E-Drive = /mnt/user/E-Drive = /mnt/e-drive, M-Drive = /mnt/user/M-Drive, /mnt/m-drive)
- Hotio sonarr (Host Path for /config = /mnt/user/appdata/sonarr = /config, E-Drive = /mnt/user/E-Drive = /mnt/e-drive, M-Drive = /mnt/user/M-Drive, /mnt/m-drive)
Plex, qbittorrent, and sonarr are all working as I expect them to. So what I'm thinking is adding two more drives - 1 SSD and 1 HDD.
- The SSD will be used for appdata, but mostly for Plex - Plex is a little slow at times (I've seen this before on an old machine, and putting it on an SSD sped it right up).
- The torrents don't need to be cached and I don't need them to be part of the array, so I'm thinking about adding the HDD as an Unassigned Device and mapping the E-Drive for qbittorrent and sonarr to the new HDD. (Additionally, unless I stop the torrent Dockers, Mover never does anything.)
First - does this look like an OK setup? Second - how would I go about making these changes? (Like, do I keep the new SSD in Unassigned Devices, and how do I change Plex and appdata to point to it? And what do I need to copy so Plex doesn't have to rebuild all of the libraries yet again.) TIA!
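On the "what do I need to copy" question at the end: when moving Plex's appdata to a new drive, the key is copying with ownership and permissions preserved so the container finds its existing library. A minimal sketch with cp -a (temp directories stand in here for /mnt/user/appdata and the new SSD's mount point, which will differ on your system):

```shell
# Stand-ins for the real paths (/mnt/user/appdata and the new SSD mount).
SRC=$(mktemp -d)
DST=$(mktemp -d)
mkdir -p "$SRC/plex/Library"
echo "library db" > "$SRC/plex/Library/com.plexapp.db"
# -a preserves permissions, ownership, timestamps, and symlinks.
cp -a "$SRC/." "$DST/"
ls "$DST/plex/Library"
```

rsync -a does the same job with progress reporting; afterwards, point the container's host paths at the new location.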
  6. Bumping an old thread.... I'm wanting to go the 10Gbps route as well. Going fiber is blazing a new trail for me - lots of technology I'm not familiar with. It's a long way from the old 2.5Mbps ARCNet I used to run. I would like to have a 10G NIC in my Unraid server as well as in my main Media Server/Workhorse, and I would like both to have access to everything on my 1G LAN and vice versa. I don't have a lot of PCIe slots available on my main Media Server (I only have 3, and the GPU with its cooling fan takes up two, leaving me a single usable PCIe x4 slot). The Unraid server has 4 PCIe x16 slots (1 in use), 1 PCIe x1 slot, and one plain old PCI slot. So this is what I was thinking:
10G NIC in the Unraid server (Mellanox MCX353A-QCBT ConnectX-3)
10G NIC in my Media Server (Mellanox MCX353A-QCBT ConnectX-3)
Both connected to a 10G switch which also has 1G ports (10Gtek 5-Port Fast Ethernet Desktop Fiber Switch, with 2 Ports Dual SC Fiber - https://www.amazon.com/gp/product/B0895XB1SD).
First, will that work? Second, I understand I'll need some kind of transceiver to slide into the Mellanox cards, and I'll need some cables (thinking 2M) to go from the Mellanox/transceivers to the switch. What would I need? I suppose, though, it would be less expensive to connect the two computers directly at 10G and use the built-in NICs on the motherboard (Realtek) to connect to the 1G network. In that case, I believe it would be easier to wire, but again, what would I need? Transceivers and cable.
  7. This worked great: echo 1 > /sys/block/sdX/device/delete (replace sdX with sda, sdb, sdc, whatever) It wasn't working for me at first, but then a lightbulb went off (on?) in my dim brain and I stopped the array. Bingo.
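Going the other way - making the controller rediscover a disk without rebooting - there's a matching sysfs knob (pick the hostN under /sys/class/scsi_host/ that corresponds to your controller):

```shell
# Ask SCSI host 0 to rescan its bus for devices (adjust host0 to match).
echo "- - -" > /sys/class/scsi_host/host0/scan
```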
  8. Hello all, I've got an activation code for Basic that I bought through the website (but I have too many devices, so I'll need to upgrade that Basic code to Pro). But when I put the activation code in the app, I get a Too Many Devices message and the system won't let me go further. I can't upgrade via the website, only the app. The whole point was to use the 15% off Summer Sale code for Basic, then upgrade to Pro. (Unfortunately, I'm in a position where I'm always looking for a way to save a buck!) Any ideas?
  9. I'm new to Unraid, so I may be missing a few steps and using some wrong terminology, but here goes... I set up an Unraid server with a 14TB parity disk, an 8TB storage disk (first in the array), and, just because I had it lying around, a 512GB SSD. All connected via SATA 6Gb/s. All pretty simple, but "readying" the parity disk was going to take 27 hours. Totally blank/new drives. 27 hours. OK, whatever. While it was building the parity disk, I did see that the 8TB share was available; I could connect to it from a Windows 11 machine and write to it. Write performance to the share on a gigabit LAN was a little slower than usual, but acceptable. Time to build parity went way up (it was up to 9 days at one point) when I copied data, but I was just testing the waters. After I was done playing around, it coalesced back down to ~27 hours.
So, anyways, 27 hours later. My real goal is to copy about 60TB from my backup StableBit DrivePool array (think Windows Storage Spaces, but DrivePool uses standard NTFS drives and you are able to access files directly from the drives), spanning ten 6TB and 8TB drives, to Unraid. Since I don't have another 60TB lying around, I knew I had to go through a process where I would copy one drive from the DrivePool to Unraid, then add that DrivePool drive to the Unraid array, rinse and repeat 10 times. So I hooked up half of the drives to the SATA ports on my motherboard; all were recognized as Unassigned Devices. I mounted the first drive and used Midnight Commander (MC) to copy the data from /mnt/disks/{drivename} to /mnt/user/{sharename}. It was SLLLLLLOOOOWWW. I expected it to take about 10 hours (it was about 5TB), but the MC ETA said 24 hours. I let it run, figuring that because it was copying some smaller files first, the ETA would be off - when it got to the meat of the drive, with quite a few 1 to 5GB files, the ETA would drop. I went to bed and checked 8 hours later; the ETA was still about 24 hours (it fluctuated down a little, but not much).
I figured I'd let it finish while I did some work, ran some errands, etc. Sure enough, 24 hours later it completed. The average speed was about 40MB/s according to MC. OK, well, I figured, it's a day - I could live with that if I had to; I'm not in a hurry and I don't have to babysit it while it copies. Then I went to add the drive that had been copied to the Unraid array and move on to the next one. No - Unraid wanted to "clear" the drive before I could format it and add it to the array. How long was clearing supposed to take? According to the Unraid Main page, about 25 hours. OK, I figured, something wasn't right. 2 full days to copy 5TB of data and move on to the next drive? All SATA connected, no LAN, no USB. No. I was about to give up on Unraid; as much as I like what it has to offer, it would take 20+ days to copy the data. Just no.
So I started reading. It looked like the main thing causing the slowness was parity. I tried disabling/stopping the parity drive. Speed from the next drive in the pool to the array (just a few test files) was almost double. But Unraid still wouldn't let me add the old NTFS-formatted drive to the array without clearing it first. I tried fdisk and wiping the drive - no go. So I read some more. Apparently removing the parity drive (adding it back to Unassigned Devices) might help. When I was done copying, the parity would have to be rebuilt, but it should result in faster adding. Sure enough, removing the parity drive allowed me to format the old pool drive, and I was able to add it to the array in just minutes. I'm copying now (again using MC) from the old drive (/mnt/disks/{drivename}) to the array (/mnt/user/{sharename}). The average speed is 132MB/s and the ETA is about 9 hours. When everything is copied, I'll put the 14TB drive back in as parity and let it rebuild - I sure hope it doesn't say 20+ days!!!!
So, anyways, yeah, I've got things copying pretty close to the theoretical limit of SATA, and adding a pool drive now only takes a few minutes. (Oh yeah, figuring out how to turn on "Destructive Mode", and whatever else needs to be done, is not exactly intuitive.) But does anyone know what I could have done (or could do) differently? I don't mind *some* overhead, but 48+ hours for a process that theoretically only takes about 9 hours seems like overkill.
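For anyone sanity-checking copy times like these, the ETA falls straight out of size over throughput; a quick back-of-the-envelope using a 5TB drive and MC's reported speeds:

```shell
# Back-of-the-envelope ETA: hours = bytes / (bytes per second) / 3600.
SIZE=$((5 * 1000 * 1000 * 1000 * 1000))   # ~5TB, as in the copies above
for SPEED_MB in 40 132; do
    ETA_H=$(( SIZE / (SPEED_MB * 1000000) / 3600 ))
    echo "${SPEED_MB} MB/s -> ~${ETA_H} hours"
done
```

At 132MB/s the 5TB copy lands right around the ~9-10 hour figure above, so that run really is close to what the hardware can sustain.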