panospatacos

Everything posted by panospatacos

  1. True, the SAS drives have a slightly different connector than SATA. In most cases a SATA drive can fit into a SAS connector, but not the other way around. I am using this drive cage for my disks: https://www.amazon.com/gp/product/B00DGZ42SM/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1 The odd thing for me is that the drive populates in the list of drives and only dies when I try to add it to the array. I am using this controller card for the interface: https://www.amazon.com/Supermicro-AOC-SAS2LP-MV8-8-Channel-Adapter-Channel/dp/B005B0Z2I4/ref=sr_1_2?dchild=1&keywords=supermicro+sas+card&qid=1611928253&s=electronics&sr=1-2 For a bit more information, this is the 3rd SAS drive I have tried with this. I am starting to wonder if it is my controller card.
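     For anyone hitting the same thing, a rough sketch of how to confirm the controller is actually presenting the drive to Linux before blaming the card (the device name sdk is just an example from the log in the next post; substitute your own):
        # list the block devices the kernel currently sees, with transport and model
        lsblk -o NAME,SIZE,TRAN,MODEL
        # ask the drive for its identity page directly; a healthy drive should answer
        # this even if Unraid refuses to add it to the array
        smartctl -i /dev/sdk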
  2. Hey All, I am trying to install a new SAS HDD into my array (not my first SAS drive, but most are SATA). When I reboot the Unraid server, I can see it as an unallocated blue-square drive. I stop the array, bring up disk 22 (the next open slot), and select the drive from the pull-down. Things go gray for a bit and then it comes back with no device in slot 22 and no unallocated drive. In the syslog I get a buffer I/O error on dev sdk. I've attached the diagnostics and syslog. Thanks ahead of time, Panos. Attachments: syslog, servme-diagnostics-20210128-1817.zip
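     A minimal sketch for catching that buffer I/O error as it happens, assuming the device name from the log (sdk) and the standard Unraid syslog location:
        # watch the syslog live while re-trying the add, filtered to the problem device
        tail -f /var/log/syslog | grep -i sdk
        # the kernel ring buffer often has more detail on the same event
        dmesg | grep -i sdk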
  3. Yeah, that is what I thought. So I pulled the IP for GitHub and added it for docker.io in /etc/hosts. It didn't take; I would assume it didn't like the host header for docker.io going to GitHub. So I did a little experimenting. I had my network adapter DNS order set to 192.168.0.1 --> 8.8.8.8 --> 8.8.4.4. I changed the order to 8.8.4.4 --> 8.8.8.8 --> 192.168.1.1 (I also updated my local subnet). Now everything comes up fine. I am not even a novice at Docker, so I don't understand why this worked, but now everything is functioning perfectly. My guess is that it only uses one of the DNS entries, and when that query doesn't get an answer it falls back to docker.io as the domain. I splunk my local network traffic, so I will comb through that to see if anything jumps out and post it here. But for now I am back in business.
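     One quick way to see which of those resolvers actually answers for the registry (registry-1.docker.io is the endpoint the Docker daemon normally talks to; run this from any machine on the LAN that has nslookup):
        # query each configured DNS server directly instead of relying on the adapter's order
        nslookup registry-1.docker.io 192.168.1.1
        nslookup registry-1.docker.io 8.8.8.8
        nslookup registry-1.docker.io 8.8.4.4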
  4. I have the Community Applications plugin installed. I uninstalled and reinstalled it (2016.12.31). I deleted the docker.img file. I tried to rerun a Docker install from the container page and the Apps tab, and I am still getting the docker.io domain call and the same error.
  5. For me Docker is the preferred method. It is really nice to have the whole app containerized with the ability to map local paths. I have also noticed that Plex itself just had a big push for its Docker image, so there are for sure some big pushes coming from there. I used to run it as a plugin and it worked fine. I ran into some minor problems when I had to move my library around; Docker (at the time) would have made it easier.
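     A minimal sketch of the path mapping that makes that kind of library move painless. The image name comes from the binhex repository mentioned elsewhere in this feed, but the host paths and container mount points are placeholders, not a complete Plex setup:
        # map host shares into the container; moving the library later only means
        # changing the -v host side, the container keeps seeing the same internal paths
        docker run -d --name plex \
          -v /mnt/user/appdata/plex:/config \
          -v /mnt/user/Media:/media \
          binhex/arch-plexpass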
  6. Hey All. TLDR: Every image I try to pull down for Docker seems to only go to docker.io and I can't seem to find a way to change to any other domain. I was recently having trouble downloading updates for my Docker images. Docker would say there is an update available, but when I tried to download, it wouldn't get very far before coming up with the following error: Error: Tag latest not found in repository docker.io/binhex/arch-plexpass. I have several template repositories saved in Docker and none of them seem to be called when downloading the Docker images. For instance, I added https://github.com/binhex/docker-templates to the template repositories. When I click on Add Container, I can see all of the great templates there, but when I try to install any of them, it only calls docker.io, not GitHub or any other repository. When I trace the calls to docker.io, I can see a 301 redirect that I believe Docker is not following, and even if it did, it would go to docker.com and the images are not there. I have been banging my head on this one for a while and can't seem to find any setting or config where docker.io is listed. Today I stood up a new docker.img to see if that would fix the issue, but it seems to be doing the same thing. Can someone point me in the right direction? Thanks, panospatacos
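     Worth noting, as far as I understand it: the template repositories only hold the container XML, while the image itself is still pulled from a registry, so a manual pull is a quick way to separate a template problem from a registry problem:
        # pull the image by hand, bypassing the template entirely
        docker pull binhex/arch-plexpass:latest
        # see what repository/tag the daemon ended up with locally
        docker images | grep arch-plexpass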
  7. As was mentioned before, the only way to get a true static IP is from your ISP. They usually have blocks of IPs that can be allocated as static; in my case it cost an extra $5 per month. If you want a service to be accessible externally from your network, and you are planning on putting a domain on that IP, you could use a dynamic DNS service as mentioned. That just means it will change the location of your domain (the A record) to whatever IP your router currently has; there is usually only about a 15-30 minute lag. If you are running a Minecraft server (port 25565), you can have your router forward all traffic on that port to your Minecraft server. http://minecraft.gamepedia.com/Tutorials/Setting_up_a_server#Port_forwarding
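     A rough way to sanity-check things once the forward is in place (YOUR_PUBLIC_IP is a placeholder, and the port test needs to run from outside your own network, e.g. over a phone hotspot):
        # the public IP your dynamic DNS record should be pointing at
        curl ifconfig.me
        # does the forwarded Minecraft port answer from the outside?
        nc -zv YOUR_PUBLIC_IP 25565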
  8. The steps I used to create this fun situation: After I stopped the array and shut down the server, I replaced the 2TB drive with the 4TB and started the server back up. When the interface came back, I clicked on Main and it had the list of my drives with Disk 3 missing; the array was stopped. I clicked on the pull-down and selected my new 4TB drive for the missing Disk 3. I then clicked on the link for Disk 3, changed the File System Type to btrfs, clicked Apply, and then clicked Done. This took me back to the list of drives with the new drive selected for Disk 3. I then started the array. The drive came up with "Unmountable" where it usually says the disk space; below it was the Format button with the new drive's SN by it. I formatted the drive, and right after the format it began what I thought was a rebuild. It looked exactly like the rebuilds I have done in the past under reiserfs, with the exception that the disk space was not adjusting as it usually does. The process finished up in about 10 hours with a lot of writes, but no data written to the drive. At this point I am looking for next steps. I still have the old drive, so I could hook it up to my Linux machine and copy the contents directly to the new Disk 3 (not to the share as a whole, I hear that is a bad thing), or should I reformat the current Disk 3 and put reiserfs back on? Would that cause a restore of the previous data? I run 2 NASs, so I am not really worried about losing the data and can try several things.
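     If the copy-straight-to-Disk-3 route is the one to try, a minimal sketch (the old drive's mount point /mnt/old2tb is a placeholder for wherever it gets mounted; /mnt/disk3 is the per-disk path, which matches the "not to the share as a whole" advice):
        # preserve permissions/timestamps, show progress, write to the disk path not the user share
        rsync -avh --progress /mnt/old2tb/ /mnt/disk3/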
  9. Hey all, TLDR: upgraded a drive from 2TB to 4TB, formatted the 4TB as btrfs, and after a 12-hour parity restore there is no data on it. All, I replaced a 2TB drive 2 days ago with a 4TB. I am still running most of my drives as reiserfs, but I decided to go to something a little different for my replacement drive. I stopped the array, shut down the server, replaced the drive, and started back up; before starting the array, I formatted the 4TB to btrfs, added it to the array, and rebuilt the drive. It took approx 12 hours for the restore. I noticed that the drive had 32,854 reads and 10,399,343 writes. I assume with that amount of writes that it restored, but it is showing as 4TB free, and browsing to the drive gives me only 1 file that was added last night. I don't see (as of yet) that any of my data is missing, but it seems really odd to restore a drive and have nothing restored to it. I still have the old 2TB, so I guess I can plug the old drive into another machine and compare the files to see if any went missing. This just seemed strange enough that someone has probably seen it. Any ideas?
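     A quick sketch of that compare-the-drives idea (the old 2TB's mount point is a placeholder for wherever it gets mounted on the other machine or the server):
        # recursive, quiet compare: only prints files that differ or exist on one side only
        diff -rq /mnt/old2tb /mnt/disk3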
  10. I recently picked up an MSI X99S GAMING 7 motherboard. It's equipped with a Killer Network adapter. When I loaded up Unraid on this new motherboard, it would not load the network adapter. Is there any way to get drivers that would load this adapter?
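     A rough first check, assuming you can get to a local console on the box: see whether the adapter shows up on the PCI bus at all and whether any kernel driver has claimed it:
        # -nnk shows vendor/device IDs plus the "Kernel driver in use" line, if any
        lspci -nnk | grep -iA3 ethernet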
  11. OK, update. The drive rebuild finished even though the UI died; I tailed the syslog to see when the rebuild was done. I am seeing that the previously mentioned HDD (sdb) is showing errors. I am going to search this on the forums, but as far as this question goes, I think I can call this thread done.
  12. Update. Through the command line I have been able to pin this down to 1 drive (Drive 3, ST3000DM001-9YN166_W1F0XFZW (sdb)). It's producing a lot of errors in the WebUI. I don't think I am going to do anything yet, as I still have another drive rebuilding and that is going to take another 20 hours, but it is progressing. I have a feeling that this other drive is going to fail as soon as the rebuild is done. I will update this thread after that has completed.
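     A minimal sketch for putting some numbers behind that feeling while the rebuild runs (sdb per the post above):
        # overall SMART verdict
        smartctl -H /dev/sdb
        # the attributes that usually move first on a dying drive
        smartctl -A /dev/sdb | grep -Ei 'realloc|pending|uncorrect'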
  13. These might be symptoms of a bigger problem, but I frequently lose the ability to copy to my Unraid using Explorer in Windows. Because of this I have opted to use FTP. While I was cleaning up some older empty folders during the rebuild process, I noticed they were not deleting. I then went to the command line and tried to delete the unused folders there, and found that the file systems they were on responded as read-only when I was trying to delete the folders. See attached screenshot. The syslog is full of the following:
      Jun 24 19:00:17 Server kernel: ata10.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
      Jun 24 19:00:17 Server kernel: ata10.00: failed command: IDENTIFY DEVICE
      Jun 24 19:00:17 Server kernel: ata10.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in
      Jun 24 19:00:17 Server kernel: res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
      Jun 24 19:00:17 Server kernel: ata10.00: status: { DRDY }
      Jun 24 19:00:17 Server kernel: ata10: hard resetting link
      Jun 24 19:00:17 Server kernel: ata10: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
      Jun 24 19:00:17 Server kernel: ata10.00: configured for UDMA/33
      Jun 24 19:00:17 Server kernel: ata10: EH complete
      I can attach more of the syslog if needed. Thanks
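     A minimal sketch for confirming which filesystems have actually flipped to read-only (disk3 is just an example name):
        # any mount whose options begin with "ro" has been remounted read-only
        grep ' ro,' /proc/mounts
        # or look at one specific array disk
        mount | grep disk3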
  14. I am going through the process of rebuilding a 2TB drive in my Unraid server. I need to restart the server because some of the working drives have gone to read-only file systems. Before I do this, can anyone tell me if the rebuild process will keep going from where it is, or will it restart and take another couple of days? Thanks