Badams

  1. I absolutely love Unraid. I love how easy it is to use and, just as much, how easy it makes migrating servers. On Christmas morning I woke up to my storage server being completely dead (motherboard failure) and had a lot of family frantically wanting to watch Christmas movies. Luckily for me, as I was using Unraid, all I needed to do was move the hard drives from my dead server to the new one, move the USB stick across, and start it up. I was then able to recreate my media server and it was away. Total resolution time was about 15 minutes after I started the work. I would absolutely love for iSCSI to be made available natively (without the use of VMs).
  2. You know what? Now that you say that, it makes perfect sense! Duh! Thank you, @Squid. Here I was thinking it was a bug. 😳
  3. I've been having some issues for a while now when it comes to port mappings for Docker. Not with any one container, but usually most of them, and all different containers. I've found that Unraid isn't using the -p parameter at all, regardless of it existing within the UI itself. As you can see in the screenshot below, I have the host set to port 8888; however, the command that is run doesn't use the -p parameter at all.

/usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker create \
  --name='H5AI' \
  --net='br1' \
  --ip='192.168.1.243' \
  --log-opt max-size='50m' \
  --log-opt max-file='3' \
  -e TZ="Australia/Brisbane" \
  -e HOST_OS="Unraid" \
  -e 'TCP_PORT_80'='8888' \
  -v '/mnt/user/isos/':'/var/www':'rw' \
  -v '/mnt/user/appdata/H5AI':'/config':'rw' \
  'smdion/docker-h5ai'

Below is also the template for this container, which looks fine IMHO:

<?xml version="1.0"?>
<Container version="2">
  <Name>H5AI</Name>
  <Repository>smdion/docker-h5ai</Repository>
  <Registry>https://registry.hub.docker.com/u/smdion/docker-h5ai/</Registry>
  <Network>br1</Network>
  <MyIP>192.168.1.243</MyIP>
  <Shell>sh</Shell>
  <Privileged>false</Privileged>
  <Support>http://lime-technology.com/forum/index.php?topic=34009.0</Support>
  <Project/>
  <Overview>
    H5AI is a modern web server index. This docker image makes it trivially easy to spin up a webserver and start sharing your files through the web.[br][br]
    [b][span style='color: #E80000;']Directions:[/span][/b][br]
    [b]/config[/b] : this path is used to store the configuration H5AI.[br]
    [b]/var/www[/b] : this path is that will be shared via a web index by H5AI.[br]
  </Overview>
  <Category>Cloud: Network:Web</Category>
  <WebUI/>
  <TemplateURL>https://raw.githubusercontent.com/smdion/docker-containers/templates/smdion-repo/H5AI.xml</TemplateURL>
  <Icon>http://i.imgur.com/SxqiOrd.png</Icon>
  <ExtraParams/>
  <PostArgs/>
  <CPUset/>
  <DateInstalled>1574299393</DateInstalled>
  <DonateText/>
  <DonateLink/>
  <Description>
    H5AI is a modern web server index. This docker image makes it trivially easy to spin up a webserver and start sharing your files through the web.[br][br]
    [b][span style='color: #E80000;']Directions:[/span][/b][br]
    [b]/config[/b] : this path is used to store the configuration H5AI.[br]
    [b]/var/www[/b] : this path is that will be shared via a web index by H5AI.[br]
  </Description>
  <Networking>
    <Mode>br1</Mode>
    <Publish>
      <Port>
        <HostPort>8888</HostPort>
        <ContainerPort>80</ContainerPort>
        <Protocol>tcp</Protocol>
      </Port>
    </Publish>
  </Networking>
  <Data>
    <Volume>
      <HostDir>/mnt/user/isos/</HostDir>
      <ContainerDir>/var/www</ContainerDir>
      <Mode>rw</Mode>
    </Volume>
    <Volume>
      <HostDir>/mnt/user/appdata/H5AI</HostDir>
      <ContainerDir>/config</ContainerDir>
      <Mode>rw</Mode>
    </Volume>
  </Data>
  <Environment/>
  <Labels/>
  <Config Name="Host Port 1" Target="80" Default="8888" Mode="tcp" Description="Container Port: 80" Type="Port" Display="always" Required="true" Mask="false">8888</Config>
  <Config Name="Host Path 2" Target="/var/www" Default="" Mode="rw" Description="Container Path: /var/www" Type="Path" Display="always" Required="true" Mask="false">/mnt/user/isos/</Config>
  <Config Name="AppData Config Path" Target="/config" Default="/mnt/user/appdata/H5AI" Mode="rw" Description="Container Path: /config" Type="Path" Display="advanced-hide" Required="true" Mask="false">/mnt/user/appdata/H5AI</Config>
</Container>
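In hindsight this looks like expected Docker behaviour rather than a bug: published ports (-p) only apply to bridge-style networking, so when a container is attached to a custom network such as br1 with its own IP, the port mapping is simply dropped and the container is reached directly on that IP. A minimal sketch of the difference, reusing the image and ports from the template above (it assumes br1 already exists as a Docker network):

# Default bridge network: the host port is published with -p
docker create --name='H5AI' --net='bridge' -p 8888:80 'smdion/docker-h5ai'

# Custom network with a dedicated IP: no -p is emitted, because the container's
# services are reached on its own IP and container port
docker create --name='H5AI' --net='br1' --ip='192.168.1.243' 'smdion/docker-h5ai'
# -> browse to http://192.168.1.243:80 rather than http://<host>:8888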
  4. Alright, so I did have 6.7.3rc2 installed and still had the same issues... so I've rolled back to 6.6.7. How can I help? What would you like me to do?
  5. I'm wanting to throw my hand (and server) into helping with this but am a little confused about what I should start with, lol. I've been having the corruption issues for some time now, but it seems to only affect Radarr (it was affecting Sonarr also, but that stopped when I went to binhex's image). With Radarr it happens after a matter of hours on both linuxserver's and binhex's images. I've tried a whole new setup of Radarr and the same thing happens. I want to help resolve this as it's quite painful, and I have some free time on my hands.
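In case it helps anyone trying to confirm the corruption without waiting for the app to error out, SQLite has a built-in consistency check that can be run against the app database. This is only a sketch: it assumes sqlite3 is available (run it inside the container if it isn't on the host), that the container is stopped first, and that the container name, database path and filename shown here match your setup, which they may not.

# Stop the container first so the database isn't being written to
# (container name is an assumption)
docker stop radarr

# Run SQLite's integrity check against the database
# (path and filename are assumptions; adjust to your appdata share)
sqlite3 /mnt/user/appdata/radarr/radarr.db "PRAGMA integrity_check;"
# A healthy database prints "ok"; a corrupted one lists the damaged pages/indexes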
  6. Heya lovely people. I'm trying to pass through my NIC to my pfSense VM and am having some issues. The VM sees the NIC, and when I set the specific switchport as an access port for a VLAN, it works fine with all network traffic OK; however, if I set the switchport to trunk, the VM doesn't pick up the VLANs at all... I'm kinda grasping at straws at this stage, so any help would be appreciated. I've taken a look at the logs for the VM and I'm seeing these errors:

2019-05-30T22:34:59.710617Z qemu-system-x86_64: vfio-pci: Cannot read device rom at 0000:09:00.0
Device option ROM contents are probably invalid (check dmesg).
Skip option ROM probe with rombar=0, or load from file with romfile=
2019-05-30T22:34:59.717422Z qemu-system-x86_64: vfio-pci: Cannot read device rom at 0000:09:00.1
Device option ROM contents are probably invalid (check dmesg).
Skip option ROM probe with rombar=0, or load from file with romfile=

This is the excerpt from my VM XML for the actual NIC:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
  </source>
  <alias name='hostdev0'/>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x09' slot='0x00' function='0x1'/>
  </source>
  <alias name='hostdev1'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</hostdev>

and the related lspci excerpts:

09:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
09:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
09:00.0 0200: 8086:10c9 (rev 01)
09:00.1 0200: 8086:10c9 (rev 01)

And of course, the addition to my syslinux.cfg:

label Unraid OS
  menu default
  kernel /bzimage
  append pci-stub.ids=8086:10c9 initrd=/bzroot

Thanks heaps!
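The QEMU lines point at the option ROM rather than the VLAN handling, so before digging into trunking it may be worth confirming the basics those hints suggest. Just a sketch of the obvious follow-ups, nothing Unraid-specific:

# Follow the QEMU hint: see what the kernel logged when the option ROM read failed
dmesg | grep -iE 'vfio|0000:09:00'

# Confirm which driver currently owns both 82576 functions (expect vfio-pci)
lspci -nnk -s 09:00.0
lspci -nnk -s 09:00.1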
  7. Thanks heaps for all the replies, guys. Judging by the conversation, would it be better to change that controller card to something else? Would this be OK to use? https://www.ebay.com.au/itm/New-LSI-Internal-SAS-SATA-9211-8i-6Gbps-8-Ports-HBA-PCI-E-RAID-Controller-Card/131968907370?epid=628080266&hash=item1eb9f5b86a:g:q1MAAOSwZJBX~v9z
  8. I'm looking at getting a new server to act as a SAN, as I'm having issues with the number of drives I'm able to have in my current server (lack of space and an incompatible controller card). The server I'm looking at getting second-hand has a Tyan S7012 motherboard and an AMCC 9650se-12ML controller card. Are there any possible known issues with this configuration at the moment? Thanks heaps.
  9. Oh crud... Well, I guess that makes sense. Is there a way to disable IOMMU?
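For anyone who lands here later: IOMMU can usually be turned off either in the BIOS/UEFI (look for an IOMMU or AMD-Vi setting) or with a kernel parameter added to the append line in syslinux.cfg. A rough sketch of the latter on an AMD board; whether disabling IOMMU is actually the right move here depends on the advice above.

label Unraid OS
  menu default
  kernel /bzimage
  append amd_iommu=off initrd=/bzroot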
  10. This morning I had an epiphany. Let's forget about those SMART statuses for now... I know that they CAN show that a disk is dying, but I've had the same error for 8 months and haven't had an issue. I'm not saying with 100% definitive proof that this isn't the problem, but I'm also not saying it is... These two failing disks we will call sdg and sde.

After putting a newish disk into the server, and trying to run a pre-clear and then a rebuild with both not finishing (even with it just sitting there getting ready), I noticed that the temp for the drive became a * and that got me thinking. I checked the SMART status for the newish drive (let's call this disk sdf) and it's fine. Not one SMART error. So why is this one having an issue too? Surely I can't have 3 drives failing at THE EXACT SAME TIME with varying ages/brands.

OK, so my motherboard only has 4 SATA ports (Gigabyte AB-350N), which means I had to purchase an expansion card (https://www.umart.com.au/Skymaster-PCIe-4-Ports-SATA-III-6G-Card_34030G.html) to get more than 4 drives... The epiphany I had was, 'what if the "failing" drives are all plugged into the expansion card?'. So I set out to have a look into how I could check that without opening the server up (I'm 70 km away from it at work at the moment and wanted to test my theory). This is what I did.

Step 1. Take note of the failing drives. In my case it was sdg, sde and sdf.

Step 2. Run the following command to get a list of the current HDDs:

ls -alt /sys/block/sd*

Step 3. From the output, take a look at the values in the 4th lot of /'s for your drives. In my case it is 0000:09:00.0 and 0000:01:00.1 (see below):

lrwxrwxrwx 1 root root 0 May 17 08:11 /sys/block/sda -> ../devices/pci0000:00/0000:00:07.1/0000:0a:00.3/usb4/4-3/4-3:1.0/host0/target0:0:0/0:0:0:0/block/sda/
lrwxrwxrwx 1 root root 0 May 17 08:11 /sys/block/sdb -> ../devices/pci0000:00/0000:00:01.3/0000:01:00.1/ata1/host1/target1:0:0/1:0:0:0/block/sdb/
lrwxrwxrwx 1 root root 0 May 17 08:11 /sys/block/sdc -> ../devices/pci0000:00/0000:00:01.3/0000:01:00.1/ata2/host2/target2:0:0/2:0:0:0/block/sdc/
lrwxrwxrwx 1 root root 0 May 17 08:11 /sys/block/sdd -> ../devices/pci0000:00/0000:00:01.3/0000:01:00.1/ata6/host6/target6:0:0/6:0:0:0/block/sdd/
lrwxrwxrwx 1 root root 0 May 17 08:11 /sys/block/sde -> ../devices/pci0000:00/0000:00:03.1/0000:09:00.0/ata9/host9/target9:0:0/9:0:0:0/block/sde/
lrwxrwxrwx 1 root root 0 May 17 08:11 /sys/block/sdf -> ../devices/pci0000:00/0000:00:03.1/0000:09:00.0/ata10/host10/target10:0:0/10:0:0:0/block/sdf/
lrwxrwxrwx 1 root root 0 May 17 08:11 /sys/block/sdg -> ../devices/pci0000:00/0000:00:03.1/0000:09:00.0/ata11/host11/target11:0:0/11:0:0:0/block/sdg/

This shows what the drives are connected to.

Step 4a. Now take those values and drop the first set of 0's before the colon. For me this would be 09:00.0 and 01:00.1.

Step 4b. Run the following commands (substituting the values you took in step 4a):

lspci | grep 09:00
lspci | grep 01:00

And I got the following result:

09:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller (rev 11)
01:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset SATA Controller (rev 02)

So, the drives with the issues are connected to the PCIe SATA expansion card (a condensed version of steps 2-4 is sketched just after this post). Would I be safe to assume that the issue is possibly with the expansion card and not the drives themselves? When I get home, I will be moving ALL the drives to the SATA ports on the mobo to get it back up and running.

If I purchase a new expansion card (I'm looking at this: https://www.startech.com/au/Cards-Adapters/HDD-Controllers/SATA-Cards/4-Port-PCI-Express-SATA-6Gbps-RAID-Controller-Card~PEXSAT34RH), should I be able to just plug it in and away it goes? Or will I need to make some config changes in the OS? Thanks heaps for your help so far too, btw, guys.
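Since this kind of "which controller is this disk on?" question comes up a lot, the lookup in steps 2-4 can be condensed into one loop. Just a sketch using standard Linux tools (readlink, grep, lspci), nothing Unraid-specific: it prints each sd device together with the controller it hangs off.

for d in /sys/block/sd*; do
  # Resolve the sysfs symlink and keep the last PCI address in the path,
  # which belongs to the controller the disk is attached to
  pci=$(readlink -f "$d" | grep -oE '[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]' | tail -1)
  printf '%s -> %s\n' "${d##*/}" "$(lspci -s "$pci")"
done

Run against the listing above, this would show sde, sdf and sdg all sitting behind the Marvell 88SE9230 card, and sdb through sdd on the AMD chipset ports.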
  11. I do have alerts enabled, but the SMART issues that appeared were there well before I put the drives into my new server, which is when I moved across to Unraid. That was 8 months ago and it has been fine ever since. But yes, I should have acted earlier; with a baby on the way, though, I couldn't exactly get any new drives. :-( That was a massive heat spike that happened on a heatwave day we had about 7 months ago when the air conditioning was dead; it didn't stay that high for long before I acted and jerry-rigged some massive cooling for it.
---
Moving forward, I have a plan and need to know its likelihood of success/if it's even possible:
1. I purchase an 8TB WD Red drive and put this into the array for use WITHOUT parity protection (is this even possible with a current array setup?).
2. I move data from the current drives and/or backups to the 8TB drive.
3. I purchase a second 8TB WD Red drive and this becomes the new parity drive.
4. I purchase a third 8TB WD Red drive for future storage.
Thanks heaps for your advice so far, guys.
  12. I do, luckily, though it is a little dated; it won't be hard to re-download what I need. What makes you think that the other disks are dead? Was it something in the logs?
  13. Hey all, I think I might be on the event horizon of a massive hard drive failure (for me, anyway). My Unraid box has four 3TB HDDs in it (1 parity), and since the weekend I've been seeing disks 2 and 3 acting strange, almost as though they just decide to go AWOL. I got an alert last night (after starting a parity scan) that the array has 2 disks with read errors (while the parity scan was running). The parity scan finished with 514,358,631 errors, similar to another scan that was at 3,926,338 errors when I stopped it and rebooted the server. When I reboot the server, the 2 drives that were previously showing no file system and/or no temperature on the SMART status are re-connected and work fine for another 12 or so hours.

WHAT'S CHANGED SINCE THE ERRORS STARTED: Over the weekend I added a spare HDD I had lying around from an old computer into the server. This drive is a WD Green drive. The issues appear to have started when I was trying to run a clear on it (NOTE: I would have done a pre-clear using Unassigned Devices but it wasn't letting me; the bubble was showing grey?). I decided to then remove the drive from the array until I could get to the server and physically pull it. Removing it from the array didn't stop the issues, however, as I then got errors (the scan with the 514,358,631 errors) on the most recent scan. The drive has now been physically removed, but I have put the array into maintenance mode to stop it from being mounted.

I'm not sure how to proceed and am open to any/all suggestions. Please find attached the two diag dumps: one from when the WD drive was in, and another from when it was removed and rebooted.
AfterWDRem_blackbox-diagnostics-20180515-1044.zip
BeforeWDRem_blackbox-diagnostics-20180515-1028.zip
  14. Looks like I'm having the same issue (from what I can gauge) as Stardestroyer12. (Let me know if I should create a new thread; I don't mean to hijack this one.) I'm running a Ryzen 1600 and the mobo is a Gigabyte GA-AB350N. I've just updated to 6.4 from the post you suggested, but I've had it lock up again. I can still get into SSH, but the web UI will not load (I'm getting a 504 Gateway Time-out, whereas before I was getting nothing at all). At the time it happened, I was removing some files and also copying some files (using rsync) from a network location onto one of my shares.

When I get into SSH and run top, it looks like 'unraidd' is using about 40% CPU, but nothing else sticks out as using a lot of utilisation. I've tried issuing the 'reboot' command (also 'powerdown -r'), but the system never actually reboots. SSH also continues to work and accept new connections. I have no idea what to try or what to do (I've spent days trying to find/figure any more out), as the only thing fixing it at the moment is a hard reboot.
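For anyone who ends up in the same spot, it's worth using that still-working SSH session to capture some state before the hard reboot, so there's something to attach to the thread. A rough sketch using standard tools plus Unraid's own diagnostics command (the output filename below is just an example):

# Snapshot what's busy right now, in batch mode so it can be saved to a file on the flash drive
top -bn1 > /boot/top-before-hard-reboot.txt

# Collect the standard Unraid diagnostics zip (saved to the flash drive)
diagnostics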