Posts posted by Juise99

  1. 16 hours ago, Squid said:

    Probably /config/ident.cfg on the flash drive

     

    use nano to edit /boot/config/ident.cfg, or if you've exported the flash drive over SMB you can do it via the network

    "Use nano", oh you silly kids! :)

     

    Seriously though, thanks! I have no idea where that extra 0 in the SSL port came from.
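    For anyone hitting the same thing, a rough sketch of what the fix looks like. I'm assuming the web UI ports live in ident.cfg as PORT/PORTSSL-style entries and that rc.nginx is the right restart script, so double-check both:

    cp /boot/config/ident.cfg /boot/config/ident.cfg.bak   # back up the flash copy first
    grep -i port /boot/config/ident.cfg                    # find the entry holding the bad 554430 value
    nano /boot/config/ident.cfg                            # correct it to the SSL port you actually want
    /etc/rc.d/rc.nginx restart                             # script path assumed; a reboot works too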

  2. After updating, nginx won't start because something is generating an invalid port. My default port was 5580, so I assume this version added something to create an SSL port. With nginx down I don't know what to change to fix it.

     

    root@Teletraan-1:~# tail -f /var/log/nginx/error.log
    2023/06/15 16:50:19 [emerg] 7493#7493: invalid port in "127.0.0.1:554430" of the "listen" directive in /etc/nginx/conf.d/servers.conf:23

     

    less /etc/nginx/conf.d/servers.conf
    
    server {
        listen 127.0.0.1:5580; # lo
        listen 127.0.0.1:554430; # lo
        listen [::1]:5580; # lo
        listen [::1]:554430; # lo

     

    So the problem, obviously, is that 554430 is out of range. If I edit servers.conf and try to start nginx, these lines get added back in. Where is this coming from, and how do I change it without web access?
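    Here's roughly what I was planning to try to hunt down where the value comes from, since servers.conf appears to be regenerated at boot and the persistent source should be somewhere on the flash drive (paths are my best guess):

    grep -R "554430" /boot/config/ 2>/dev/null   # find the persistent setting feeding the generated servers.conf
    grep -i port /boot/config/ident.cfg          # the usual place the web UI / SSL ports are stored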

  3. OK, not sure what's going on here. I've been using Djoss's container for about 6 months with no problems. I played around with a wildcard cert and it blew up: couldn't change entries, errors deleting existing entries, just all bad. I went the lazy route and set up your container, and everything worked for about 2 days. Now, without my having touched anything, I'm having the same issues as with the other container.

     

    What logs should I be looking at?

     

    basic setup:

    home.mydomain.com is an A record that gets updated by an app if my IP changes. allmyotherstuff.mydomain.com entries are CNAMEs pointing to home.mydomain.com.

    pfSense router with rules sending 80 and 443 to the reverse proxy.

     

    I have entries for both public and local access

    Everything worked!

     

    Now if I try to regenerate an SSL cert for an existing entry I get:

    "ENOENT: no such file or directory, open '/data/nginx/proxy_host/7.conf'"

     

     

     

  4. Two things: when you install the current docker via Community Applications on 6.9.2 it doesn't show as an installed app. Also, when you change the port during install, the web UI link in "docker containers" still points to the default 8080.

     

    Thanks for your work on this!
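    A possible workaround for the web UI link, untested on my end and not necessarily the intended fix: edit the container template and make the WebUI field reference the mapped port rather than a hard-coded one, something like:

    WebUI: http://[IP]:[PORT:8080]/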

  5. On 1/24/2021 at 3:31 AM, JorgeB said:

    The way it drops looks like a device problem, if you enable turbo write and transfer directly to the array is it the same or better?

    Turbo write is enabled, and the results are similar when writing directly to the array (just a much lower initial burst). The devices test fine too. The top chart is the mechanical drives, the bottom is the cache SSD.

    benchmark-speeds.png

    benchmark-speeds cache.png

  6. The initial transfer starts at about 4 Gb/s and immediately tanks. I know a certain amount of RAM caching and hardware buffering happens in Windows and in the hardware during a file transfer, but the transfer should settle at the continuous write speed of the cache drive (Samsung 840 Pro) or the continuous read speed of the source (PERC H700 RAID 5), whichever is lower. The hardware should easily sustain a 2.5 Gb/s transfer, and as you can see that's not happening.

    transfer.png
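    For anyone wanting to reproduce this with Windows and the network out of the picture, this is roughly how I'd test the drives directly on the server (file names are just examples):

    # write 8 GiB straight to the cache SSD and to a data disk, bypassing SMB and RAM caching
    dd if=/dev/zero of=/mnt/cache/ddtest bs=1M count=8192 oflag=direct status=progress
    dd if=/dev/zero of=/mnt/disk1/ddtest bs=1M count=8192 oflag=direct status=progress
    rm /mnt/cache/ddtest /mnt/disk1/ddtest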

  7. 1 minute ago, johnnie.black said:

    Disk3 had a corrupt filesystem, just needed a filesystem check, instead you formatted disk3 deleting all data, I guess the new warning didn't do much good:

     

    Formatwarningnewv6.8.png

     

     

    I still have the original disk3, so I will look into the filesystem check. It's very possible I missed the warning; I just had a newborn and haven't slept more than 4 hours straight in 2 weeks.
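    For my own notes, this is the check I'm planning to run on the original disk3, assuming it's XFS like the Unraid default (array in maintenance mode; the md device number is my guess):

    xfs_repair -n /dev/md3   # -n is a read-only dry run; drop it to actually repair once you're sure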

     

  8. A drive died. I checked the array and found directories and their files missing, so I powered off UnRAID, got a replacement drive, removed the old drive, installed the new one, and began the rebuild. The rebuild finished this morning and the missing folders are back, but they don't contain any files. Although the rebuild is complete, the replacement drive only shows 30GB in use, while the previous drive was at roughly 2.8TB.

    teletraan-1-diagnostics-20200114-1016.zip

  9. It depends on how many points of entry you want into your network. Things like 32400 for Plex are just a way for traffic to flow directly between the servers. Since Plex isn't providing any general access to your server on that port (like a login), it's generally considered safe. Opening SSH to the world used to be considered safe because it's an encrypted protocol from start to finish; the accepted thinking these days is that there's no obscurity in that. If someone manages to obtain your login info, they know that with SSH they will generally land on a Linux box with at least user-level credentials. A VPN, in a pure point-to-point sense, isn't any more secure than SSH, but it gives your network a level of obscurity and a second layer of credential protection. If someone gets your VPN credentials they only land on your network. From there they still have to find your server (pretty easy with nmap) and obtain the credentials to log into it. And since your VPN credentials are different from your SSH credentials :) it's harder to gain access to your data.

     

    I suspect you can safely disable SSH root login in UnRAID. That way root is still available locally, just not over SSH.
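    Something along these lines is what I had in mind; both paths are from memory, so treat them as assumptions:

    nano /boot/config/ssh/sshd_config   # set: PermitRootLogin no  (persistent copy on the flash, location assumed)
    /etc/rc.d/rc.sshd restart           # restart sshd to pick it up; script path assumed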

  10. So I've decided to stick with UnRAID and picked up a Pro license. My array consists of seven 4TB drives (2 for parity) plus a 512GB cache SSD. I picked up two 12TB drives and would like to add them. My understanding is that both parity drives have to be as big as the largest drive in the array, so in order to use one of the 12TB drives for data I would have to go down to a single parity drive. What is the best way to do this, or can I mix parity drive sizes as long as one of them matches the largest drive in the array?

     

    Does UnRAID let you build volumes like FlexRAID did? With FlexRAID you could take three 4TB drives and make a 12TB volume to use in the array.

  11. 1 hour ago, John_M said:

    That's a nasty combination of a buggy chip and a port multiplier, all on a single PCIe lane. It will cause you a lot of problems as they are known to drop disks at random times. Either get an LSI-based SAS controller (which will need a x8 slot) or use all the motherboard SATA ports and buy an ASMedia 1061 or 1062-based dual port SATA controller for the extras.

    I will keep that in mind if I have problems down the road. I chose the Marvell 88SE9215 because it was listed as working in the hardware compatibility wiki. The Asus ROG STRIX B450 mobo has 6 SATA 3 ports, which are all in use, and my system has 11 drives in total, so I needed something that worked. The previous two controllers I tried wouldn't even make it through building the array; one of them, an 8-port ASM1806 & ASM1061 combo, wouldn't even let the system boot.

     

    I would like to thank everyone for their help/input! Things seem back to normal now, hopefully I don't have this issue again.

  12. So I moved the docker & libvirt img files from /mnt/disk1/system/ to /mnt/cache/system/

    I moved everything (except for dockerMan & dynamix) in /boot/config/plugins to /boot/config/plugins-removed

    I renamed /mnt/cache/appdata/PlexMediaServer/Library to library.old
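    Roughly the commands behind those three steps, in case anyone wants to follow along (Docker and VM services stopped first; paths match my setup, yours may differ):

    # move the docker/libvirt images from the array disk to the cache
    mkdir -p /mnt/cache/system
    mv /mnt/disk1/system/* /mnt/cache/system/
    # park every plugin entry except dockerMan and dynamix
    mkdir -p /boot/config/plugins-removed
    find /boot/config/plugins -mindepth 1 -maxdepth 1 \
        ! -name dockerMan ! -name dynamix \
        -exec mv {} /boot/config/plugins-removed/ \;
    # sideline the old Plex library
    mv /mnt/cache/appdata/PlexMediaServer/Library /mnt/cache/appdata/PlexMediaServer/library.old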

     

    Things appear to be back to normal. I'll just start over and re-install everything Docker- and plugin-wise.

     

    I re-installed the Plex docker. I had to re-add my Plex libraries, because they wouldn't load (hence my renaming the library directory).

     

    Anything I missed?

     

  13. 3 hours ago, trurl said:

    1) I don't see anything obvious, except your system share has files on the array instead of cache where they belong. Possibly you created docker and/or libvirt image before you added cache so they got created on the array. You don't mention any VMs, do you have any?

     

    2) Docker image isn't full now but maybe you overfilled it in the past, so I guess it's possible you have docker image corruption, but syslog doesn't have much after the reboot so can't really tell anything from that.

     

    3) Didn't take the time to look at SMART for all of your disks. Are you getting any SMART warnings on the Dashboard?

     

    You might delete docker image and recreate it so it will be on cache. Apps - Previous Apps will add your dockers back just as they were.

     

    I'm not familiar with some of those plugins but CA and UD should be fine. Maybe try running without the others for a while.

     

    Setup Syslog Server so you can retain syslogs after rebooting and maybe we can tell more if you continue to have problems.

    1) I switched my cache from two 240GB SSDs to a single 512GB drive about a week ago; it's possible those files got created then. No VMs.

     

    2) With just Plex and Tautulli that's highly unlikely.

     

    3) SMART looks good for all the drives on the system.

     

    How do I remove everything (dockers, plugins) from normal boot environment while in safe mode?

  14. So I'm new to UNRAID, but not to NAS, Linux, file-system RAID, or any of the other underlying parts that make this work. I'm giving UNRAID a shot because S2D has been lackluster, performance-wise, to say the least. Fresh hardware: Ryzen 3 3200G, Asus ROG STRIX B450 mobo, 8GB RAM, 2-port 10-gigabit Intel X540, Marvell 88SE9215 6-port SATA controller, seven 4TB drives, and a 512GB SSD cache.

     

    Everything has been up and running great for 15 days. Fresh install of 6.7.2, 1 SMB share, Community Applications plugin, Plex Docker (Plexinc), Tautulli Docker (linuxserver), Disk Location (Ole-Henrik Jakobson), Preclear Disk (gfjardim), Unassigned Devices (dlandon), and SNMP (KZ), all installed & updated.

    About two hours ago I added 6 movies to Plex via Radarr (which runs on another box). Once those finished I manually kicked off the mover and updated my Plex library. Then BOOM! Plex goes down; I restart the docker but it can't load any media. I reboot UNRAID, and now I can't even start the array because the Unassigned Devices plugin won't stop refreshing. I restart again: no web UI (locally or remotely). If I boot into safe mode, the array starts, SMB works, and the Plex docker starts but has no access to my media (the host path is there and you can see the media in the docker CLI). So now, how can I use safe mode to wipe everything short of the array and start over? Or is there something else I should be looking at?
