Sparta

Everything posted by Sparta

  1. One more quick followup to the question about worrying I'll lose data: after I posted, I checked a few other posts and saw mention of people using raidz1. Is that the consensus best way to add redundancy to an Unraid pool? One of my current big worries is losing a drive, and if I could give my array some protection, that would be rad. If I added more drives (say, 8x 14TB or something), could I then have them raided, move my data over, and then reformat the existing array devices to have redundancy? Or is there a better way? (A rough sketch of that migration idea is below.)
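     A minimal sketch of that migration idea, assuming the new drives show up as /dev/sdb through /dev/sdi (hypothetical device names) and that ZFS is available at all (on 6.9.0-rc2 that would mean the community ZFS plugin; this is generic ZFS, not an Unraid-specific recipe):

         # build a raidz1 pool named "tank" from the eight new drives
         # (raidz1 keeps working after the loss of any single member drive)
         zpool create -o ashift=12 tank raidz1 \
           /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi

         # copy the existing array data across, preserving permissions and attributes
         rsync -aHAX --progress /mnt/user/ /tank/

         # confirm the pool is healthy before wiping or reformatting the old drives
         zpool status tank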
  2. Current setup:
     1x 10TB HGST Parity
     7x 10TB HGST Array Devices (xfs)
     1x 3TB WD HDD Cache (btrfs)
     1x Flash Drive for Boot Device (vfat)
     All of this is in a Dual Xeon 4U 24-bay Supermicro X9DR3-F, running 6.9.0-rc2. I'm over 60TB used of the 70 I have in my array, so it's getting to be time to add new drives. This is scary, and I'm beginning to worry about things like drive failures and redundancy, but that's a different question. Over the years I've read conflicting things about cache drives (SSD/HDD) and would like to know the current best practices. I'm planning on adding maybe 6 more large drives, and maybe throwing in a couple of new drives for the cache. Should I get SSDs? High-reliability HDDs? Bigger drives?
     1. I know I need at least one more cache drive; what drive (or drives) would you recommend adding?
     2. Should I have a 2nd parity drive?
     3. I'm using a single flash drive as the boot device; should I add another, or is it fine?
     4. Lastly: I worry about losing data. Everything is replaceable, but I know it'll be a pain if a giant drive eats it. Is there anything I can do to allay my worries?
     BONUS: I've just recently learned about DAS, and there's a kernel of an idea for me to build a dedicated machine with a beefy, modern CPU to handle all the docker/transcoding/compression work. My machine currently has dual Xeon E5-2680s that were, in their heyday, absolutely beastly, AND it has 256GB of RAM, but I know a modern AMD chip would probably blow it out of the water. Am I overthinking things?
  3. EDIT: ID-10T error. Right after posting this I checked the manual and found I have more ethernet ports, and the ACPI port was a different one. I am an idiot.
     ==================================================
     I have an old, beefy Supermicro server (Supermicro X9DR3-F, 4U) that I've had Unraid running on for years. I recently reconfigured my network and now have a switch on my rack, opening up the ability to run a second connection to the server. I thought this would allow me to manage the server over the network with ACPI/APM (which the board supports), but I found that when I plugged in a second ethernet cable, Unraid took it (eth1) as part of a bond with eth0. I don't have a power button for this machine, and the last time I told Unraid to shut down (instead of restart), I could only bring it back up again by pulling out and reinserting the CMOS battery. This is not ideal, and I feel I should get the machine to a place where I can manage its power remotely. How do I reconfigure my server to use my second ethernet port (eth1) as an ACPI/APM port? Is this something I can configure via Unraid, or am I going to have to hook up a monitor/keyboard and change something in the BIOS? Thank you, and apologies for such a novice networking question. On version 6.9.0-rc2. (A remote power-control sketch is below.)
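     For reference, once that dedicated management port is cabled up, remote power control can be tested from another machine on the network. This is a minimal sketch assuming the port is the board's IPMI/BMC interface (as it is on most Supermicro boards); the address 192.168.1.50 and the ADMIN credentials are placeholders:

         # query the current power state over the network
         ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P 'yourpassword' chassis power status

         # power the machine back on after a shutdown (no CMOS-battery surgery needed)
         ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P 'yourpassword' chassis power on

         # a hard reset, for when the OS is unresponsive
         ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P 'yourpassword' chassis power cycle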
  4. Phew, figured it out. I went into /boot/config/networking.cfg, scrolled down, found NETMASK[0], and saw it was set to '255.255.255.240' for eth0 and '255.255.255.252' for my two VLANs. That looked wrong, so I deduced that this must be what the /24 -> /28 and /30 settings had affected. I changed them back to 255.255.255.0, restarted (shutdown -r now), and it booted up fine. (The corrected entries are sketched below.)
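     For anyone hitting the same thing, the relevant lines ended up looking roughly like this. This is a sketch based on the description above; the exact keys and indices in the file can differ by Unraid version and interface count:

         # netmask entries after the fix -- all back to /24
         NETMASK[0]="255.255.255.0"   # eth0/br0:  was 255.255.255.240 (/28)
         NETMASK[1]="255.255.255.0"   # br0.201:   was 255.255.255.252 (/30)
         NETMASK[2]="255.255.255.0"   # br0.202:   was 255.255.255.252 (/30)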
  5. I recently upgraded to 6.4 because I have been trying to get docker to work with a VLAN IP. I had added VLANs fine, but all the IPs pointed to Unraid itself, which made running a container on port 80 on one of those VLAN IPs not work, since it complained about the port already being in use (by Unraid itself).
     So, I went into network settings and saw that the IP address for Unraid had /24, and that the routing table all looked like 192.168.xx.0/24 -- for all 3 interfaces (br0, br0.201, br0.202 -- the latter two being VLANs). My thought was that maybe Unraid, by virtue of having /24 for its own domain, was 'consuming' those VLAN IPs, and so I made the real boneheaded move of setting it to /28, and setting the VLANs to /30.
     First off: yes, I am an idiot. I realize now that what I probably did was tell Unraid that its IP exists only within a tiny /28 subnet, so anything outside that range is no longer reachable. Networking is not my strong suit. I have a monitor + keyboard plugged into the box, and I see the shell prompt up, but I am at a loss as to what I need to do to fix this. Any assistance? (Subnet-size arithmetic is below, for context.)
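     For context on why shrinking the prefix broke things: the CIDR suffix just fixes how many address bits belong to the network, which caps how many neighbors the host considers 'local'. This is general subnet arithmetic, not anything Unraid-specific:

         # usable hosts per subnet = 2^(32 - prefix) - 2 (network + broadcast reserved)
         # /24 -> 255.255.255.0   -> 254 usable addresses (the whole 192.168.xx.0 range)
         # /28 -> 255.255.255.240 ->  14 usable addresses
         # /30 -> 255.255.255.252 ->   2 usable addresses

         # quick check in bash: usable hosts for a /28
         echo $(( 2**(32-28) - 2 ))   # prints 14

     So with /28, the server treats only 14 addresses as on-link; if the gateway or the VLAN hosts fall outside that slice, they become unreachable.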
  6. Brand new install, running into an issue where the container won't start; the logs complain that it can't find an OpenVPN config. I'm using PIA, and it's set to PIA. I've changed almost nothing from the default config other than my data directory and name/pass. I also added VPN_REMOTE as per the docs. I've deleted the container and tried again three times, including making sure my config directory pointed somewhere new in case that was the issue.
     I found that PIA has a zip with all their .ovpn files. I added them to the /config/openvpn folder and it managed to get past that error; however, I noticed in the logs that it was then failing while trying to hit PIA's auth IP. Deluge started, but I just want to make sure everything is correct, since further up in the logs I saw it using a Brazil endpoint, which is not what I set VPN_REMOTE to (I set it to swiss.privateinternetaccess.com).
     So, summing up: it looks like it connected, but it loaded the Brazil .ovpn file, then seemed unable to reach PIA's auth, and ignored the config I had set for which server to use. The fact that all these things went wrong but it still connected has me worried that it's not actually behind a VPN. (A quick sanity check is sketched below.)
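     One quick way to settle the "am I actually behind the VPN" worry is to compare the container's public IP with the host's. A minimal sketch, assuming the container is named binhex-delugevpn and has curl available inside it:

         # public IP as seen from inside the container (should be a PIA address)
         docker exec binhex-delugevpn curl -s ifconfig.io

         # public IP as seen from the host (your real WAN address)
         curl -s ifconfig.io

     If the two addresses differ, torrent traffic is leaving via the tunnel; if they match, the VPN is not actually in the path.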
  7. I apologize for a question that I know has been answered -- I've just gotten a bit confused. I am seeing some slowdown on unraring in nzbget, and I've read that this is because I'm using the array, so an unrar is a read plus a write to the same array, which is slow. The solution appears to be to put it on a cache drive; but then how should I configure Sonarr? /data -> cache, /media -> array? I can't tell if the 'Data' mount is for Sonarr's own app data or for the 'read' side of the importer. (An example mapping is sketched below.)
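     For what it's worth, in the binhex/linuxserver-style containers the app's own data lives in the /config volume, so the /data mount is the importer's read side (the downloads), not app data. Here is a sketch of one common mapping; the share names downloads and media and the appdata paths are placeholders, and it follows the binhex images' /data convention:

         # nzbget downloads and unrars on the cache-backed share (fast, no parity writes)
         docker run -d --name nzbget \
           -p 6789:6789 \
           -v /mnt/cache/downloads:/data \
           -v /mnt/cache/appdata/nzbget:/config \
           binhex/arch-nzbget

         # Sonarr sees the same downloads path plus the array-backed media share;
         # the import step is then a single move from cache to array
         docker run -d --name sonarr \
           -p 8989:8989 \
           -v /mnt/cache/downloads:/data \
           -v /mnt/user/media:/media \
           -v /mnt/cache/appdata/sonarr:/config \
           binhex/arch-sonarr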
  8. Yep, looks like X9.

         # dmidecode 3.0
         Getting SMBIOS data from sysfs.
         SMBIOS 2.7 present.

         Handle 0x0002, DMI type 2, 15 bytes
         Base Board Information
                 Manufacturer: Supermicro
                 Product Name: X9DR3-F
                 Version: 1234567890
                 Serial Number: {My Serial Number}
                 Asset Tag: To be filled by O.E.M.
                 Features:
                         Board is a hosting board
                         Board is replaceable
                 Location In Chassis: To be filled by O.E.M.
                 Chassis Handle: 0x0003
                 Type: Motherboard
                 Contained Object Handles: 0

     Thanks for the quick response -- I'll see if I can fiddle with the fan speed in the BIOS, and maybe I'll order some sound-dampening foam :]
  9. I've got a new server up and running on a Supermicro X9DRi-F board. The fans are *very* loud, so I installed IPMI and was happy to see that it grabbed all the sensors and fans and seems to be compatible with my board. However, I can't seem to get fan control to work right. I have it switched on, the settings are set to Auto with a high of 80C and a low of 20C, and speed max/min set to 100% and 1%. All of my drives, CPUs, and other sensors are clocking in between 35-43C, but the fans are still running at 3k RPM. Weirdly, a few fans do seem to cycle down to 2775 RPM and then back up on occasion, yet the fan event log in IPMI shows nothing at all. Has anyone encountered this? Part of me thinks that IPMI isn't able to control the fans; maybe I've set something wrong in my BIOS? Is there a way to manually turn a fan off or down from within IPMI, to test whether it can actually manage fan speed? (One way to poke at it is sketched below.)
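     One way to test whether the BMC will take fan commands at all is ipmitool from the Unraid shell (the IPMI plugin bundles it). The raw fan-mode command below is widely reported for Supermicro boards, but raw codes are board-specific, so treat this as an assumption and verify against your board's documentation before relying on it:

         # read the fan speeds the BMC currently reports
         ipmitool sensor list | grep -i fan

         # read the current fan mode (commonly 0=standard, 1=full, 2=optimal, 4=heavy IO)
         ipmitool raw 0x30 0x45 0x00

         # set fan mode to full as a test -- if the fans ramp up, the BMC is in control
         ipmitool raw 0x30 0x45 0x01 0x01

     If the fans respond to the mode change but not to the plugin's thresholds, the BMC's own fan mode (set in the BIOS/BMC web UI) is likely overriding the plugin.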