
CrunchyToast


Posts posted by CrunchyToast

  1. Looks like issues are still ongoing. The only hardware that is still the same is the physical disks in the array. Everything else (including the cache and USB boot drives) has been replaced.

     

    Still nothing in the logs either. Now what seems to be happening is that everything stops responding (shares, Docker, VMs, and most of the web interface). If I go to my server IP, I can still log in; however, only the header with the navigation links loads. The rest of the page is blank. I can navigate to other pages, but nothing ever loads beyond the header. If I try to connect via SSH, it responds extremely slowly. If I issue a reboot command, the system begins the shutdown process but stops and just hangs, and I have to manually power cycle it. When it powers back on, it boots to the CLI login screen, but the web GUI and all other services are unreachable. It takes about 2 or 3 power cycles to get things going again.

     

    After swapping out all the hardware, I got 14 days of uptime prior to the crash. Now it's happened twice in the last 48 hours.

     

    What could possibly be going on here?

  2. My syslog appears to have filled up pretty quickly. Tons of BTRFS errors now, but no crash since removing the trim plugin. Rebooting due to the message below and seeing what happens.

     

    Jun 24 05:43:32 Vault kernel: Fixing recursive fault but reboot is needed!
    

     

    Just a snippet of the attached syslog:

    Jun 24 06:40:43 Vault kernel: 	item 170 key (216726 108 823296) itemoff 6748 itemsize 53
    Jun 24 06:40:43 Vault kernel: 		extent data disk bytenr 177476538368 nr 8192
    Jun 24 06:40:43 Vault kernel: 		extent data offset 0 nr 8192 ram 8192
    Jun 24 06:40:43 Vault kernel: BTRFS error (device sdg1): block=176288710656 write time tree block corruption detected
    Jun 24 06:40:43 Vault kernel: BTRFS: error (device sdg1) in btrfs_commit_transaction:2377: errno=-5 IO failure (Error while writing out transaction)
    Jun 24 06:40:43 Vault kernel: BTRFS info (device sdg1): forced readonly
    Jun 24 06:40:43 Vault kernel: BTRFS warning (device sdg1): Skipping commit of aborted transaction.
    Jun 24 06:40:43 Vault kernel: BTRFS: error (device sdg1) in cleanup_transaction:1942: errno=-5 IO failure
    Jun 24 06:41:01 Vault kernel: blk_update_request: I/O error, dev loop2, sector 6787184 op 0x1:(WRITE) flags 0x100000 phys_seg 1 prio class 0
    Jun 24 06:41:01 Vault kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 1, rd 0, flush 0, corrupt 0, gen 0
    Jun 24 06:41:01 Vault kernel: blk_update_request: I/O error, dev loop2, sector 21056496 op 0x1:(WRITE) flags 0x100000 phys_seg 1 prio class 0
    Jun 24 06:41:01 Vault kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 2, rd 0, flush 0, corrupt 0, gen 0
    Jun 24 06:41:05 Vault kernel: blk_update_request: I/O error, dev loop2, sector 19397120 op 0x1:(WRITE) flags 0x5800 phys_seg 118 prio class 0
    Jun 24 06:41:05 Vault kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 3, rd 0, flush 0, corrupt 0, gen 0
    Jun 24 06:41:05 Vault kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 4, rd 0, flush 0, corrupt 0, gen 0
    Jun 24 06:41:05 Vault kernel: blk_update_request: I/O error, dev loop2, sector 19398144 op 0x1:(WRITE) flags 0x1800 phys_seg 68 prio class 0
    Jun 24 06:41:05 Vault kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 5, rd 0, flush 0, corrupt 0, gen 0
    Jun 24 06:41:05 Vault kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 6, rd 0, flush 0, corrupt 0, gen 0
    Jun 24 06:41:05 Vault kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 7, rd 0, flush 0, corrupt 0, gen 0
    Jun 24 06:41:05 Vault kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 8, rd 0, flush 0, corrupt 0, gen 0
    Jun 24 06:41:05 Vault kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 9, rd 0, flush 0, corrupt 0, gen 0
    Jun 24 06:41:05 Vault kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 10, rd 0, flush 0, corrupt 0, gen 0
    Jun 24 06:41:05 Vault kernel: blk_update_request: I/O error, dev loop2, sector 19921408 op 0x1:(WRITE) flags 0x1800 phys_seg 118 prio class 0
    Jun 24 06:41:05 Vault kernel: blk_update_request: I/O error, dev loop2, sector 19922432 op 0x1:(WRITE) flags 0x1800 phys_seg 68 prio class 0
    Jun 24 06:41:05 Vault kernel: BTRFS: error (device loop2) in btrfs_commit_transaction:2377: errno=-5 IO failure (Error while writing out transaction)
    Jun 24 06:41:05 Vault kernel: BTRFS info (device loop2): forced readonly
    Jun 24 06:41:05 Vault kernel: BTRFS warning (device loop2): Skipping commit of aborted transaction.
    Jun 24 06:41:05 Vault kernel: BTRFS: error (device loop2) in cleanup_transaction:1942: errno=-5 IO failure
    Jun 24 06:41:32 Vault kernel: blk_update_request: I/O error, dev loop2, sector 17435296 op 0x1:(WRITE) flags 0x100000 phys_seg 3 prio class 0
    Jun 24 06:41:32 Vault kernel: btrfs_dev_stat_print_on_error: 18 callbacks suppressed

     

    vault-syslog-20210624-2324.zip

  3. Hey all -

     

    I've been trying to figure out why my server has been randomly crashing ever since a power outage last week. My UPS didn't stay online long enough for the server to power down gracefully. It's been super intermittent as to when it happens. When it first occurred, it lost one of my cache drives. I was able to add it back and scan everything, and there were no errors. I did have to run `xfs_repair` on a couple of drives in the array, which fixed the errors reported in the unRAID GUI.
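
    For anyone who finds this later, the repair was roughly along these lines (a sketch only; the array was started in maintenance mode, and /dev/md1 is just an example device, not necessarily the disk involved):

    xfs_repair -n /dev/md1   # check only: report problems without modifying anything
    xfs_repair -v /dev/md1   # actual repair, verbose output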

     

    I started recording the syslog, and the one below is the last one that was running when the server crashed this morning around 5 AM. I only included the lines from within an hour of the crash. Memtest passed as well.

     

    Note: I removed the trim plugin this morning as it isn't needed with BTRFS. It had been running for the last couple of years without an issue.

     

    Jun 23 04:00:15 Vault root: /etc/libvirt: 920.7 MiB (965373952 bytes) trimmed on /dev/loop3
    Jun 23 04:00:15 Vault root: /var/lib/docker: 42.2 GiB (45270634496 bytes) trimmed on /dev/loop2
    Jun 23 04:00:15 Vault kernel: sd 7:0:1:0: [sdc] tag#5689 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    Jun 23 04:00:15 Vault kernel: sd 7:0:1:0: [sdc] tag#5689 Sense Key : 0x5 [current] 
    Jun 23 04:00:15 Vault kernel: sd 7:0:1:0: [sdc] tag#5689 ASC=0x21 ASCQ=0x0 
    Jun 23 04:00:15 Vault kernel: sd 7:0:1:0: [sdc] tag#5689 CDB: opcode=0x42 42 00 00 00 00 00 00 00 18 00
    Jun 23 04:00:15 Vault kernel: blk_update_request: critical target error, dev sdc, sector 973080532 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Jun 23 04:00:15 Vault kernel: BTRFS warning (device sdg1): failed to trim 1 device(s), last error -121
    Jun 23 04:00:15 Vault sSMTP[5384]: Creating SSL connection to host
    Jun 23 04:00:15 Vault sSMTP[5384]: SSL connection using ECDHE-RSA-AES256-GCM-SHA384
    Jun 23 04:00:17 Vault sSMTP[5384]: Sent mail for [email protected] (221 2.0.0 Bye) uid=0 username=root outbytes=461
    Jun 23 04:40:04 Vault root: Fix Common Problems Version 2021.05.03

     

    Attached are my diagnostics. unRAID version 6.9.2.

     

    Thanks for any info you may be able to share!

    vault-diagnostics-20210623-1311.zip

    IMG_2503.jpeg

  4. On 7/29/2020 at 7:01 PM, jlficken said:

    Has anyone had any luck getting it to work when it's on a different VLAN than the UnRAID server, behind the LetsEncrypt NGINX reverse proxy?

    Yes, it works just fine for me. What issue are you running into?

    On 7/31/2020 at 7:19 AM, rojarrolla said:

    I have the same cameras as you. Interestingly, I found that they are made by Swann. Anyway, which one is better, SMTP or FTP?

    Thanks!

    I wouldn't really say one is better than the other, but for a faster response and quicker record trigger, use FTP.

    On 8/1/2020 at 5:37 PM, pepbill said:

     

    I can't figure out a step in the FTP instructions. I can't seem to figure out where to run:

    cd /home/Shinobi

    node tools/modifyConfiguration.js addToConfig='{"dropInEventServer":true, "ftpServer":true}'

     

    Any help is appreciated!

    I was manually modifying the config via the /super config GUI. There was an issue with the config getting reset back to default somehow, so I just created the conf.json file in Shinobi's appdata folder and haven't had an issue since.
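
    For anyone wondering where to run the commands pepbill quoted above, they're meant to be run inside the Shinobi container itself. A rough sketch, assuming the container is named Shinobi (swap in whatever your container is actually called):

    # run the quoted commands inside the (hypothetically named) Shinobi container
    docker exec -it Shinobi sh -c "cd /home/Shinobi && node tools/modifyConfiguration.js addToConfig='{\"dropInEventServer\":true, \"ftpServer\":true}'"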

    On 8/2/2020 at 7:07 PM, Trites said:

    Every time I make a change to the docker it keeps resetting the conf.json file. I figured it would be stored in the appdata folder but it isn't. Any way to move the conf file?

    You can manually create the conf.json file. The container will use it instead. Just make sure it's the full config as it won't even look at the other one anymore.

  5. On 6/7/2020 at 8:01 AM, repomanz said:

    Any luck here? I'm buying a GPU soon to handle HandBrake/Plex and would love for it to also handle Shinobi. If not, no worries I guess; my 2 cameras only use about 9 to 11% CPU (i7 6700K).

    For those of you seeing high CPU usage in Shinobi when using the built-in motion detection: most IP cameras have FTP or email alerts built in, along with motion detection. I'm using Reolink RLC-410-5MP cameras. Check out the following links on how to use your camera's built-in detection rather than Shinobi's. When I was using the motion detector in Shinobi, I was seeing anywhere between 40% and 80% CPU usage with 8 cameras. When I switched over to my cameras' built-in motion detection, it dropped to around 7%-10% max with 8 cameras going. The FTP trigger will give you the fastest motion response.

     

    https://hub.shinobi.video/articles/view/Qdu39Dp8zDqWIA0

    https://hub.shinobi.video/articles/view/LyCI3yQsUTouSAJ

     

    (screenshot attached)

  6. Hello,

     

    I'm having an issue with VLANs on unRAID. On my network I have VLAN 3 set up. Anything I plug directly into the switch ports works as it should with the VLAN. If I try to use the VLAN on unRAID, it won't work (no network connection). I tried with a VM as well and had the same result. Docker and VMs work just fine if I use br0.

     

    On unRAID I created the VLAN on my eth0 connection.

     

    VLAN Number: 3

    Connection: br0.3

    Protocol: IPv4

    Address Assignment: None

     

    Docker:

    IPv4 custom network on interface br0.3:

    Subnet: 192.168.3.0/24 Gateway: 192.168.3.1

    DHCP pool: not set

     

    If I configure a VM to use br0.3, it does not pull an IP from DHCP. There is no connection even if I assign one manually.

    Let me know if you need additional info. Any ideas as to what could be going on?

     

    Resolution: I was overthinking the issue. I recently swapped out my gateway for a new one. During configuration, I had tagged the eth0 port on the switch to use only LAN instead of All Networks. Switching that setting resolved it. I'm going to leave this post up so others also remember the simpler things.

  7. 1 hour ago, Djoss said:

    Not sure if the extra directory is needed or not, but you can configure the automatic video converter to keep the same folder hierarchy as the input.  See the "Automatic Video Converter: Output Subdirectory" container setting.

    I did dig around with this. When using the same-as-source option, it would only keep the most immediate folder, not multiple directory levels. Either way, it's working for me.

  8. 20 hours ago, Djoss said:

    Can you use the Radarr's post processing script to copy the (already renamed) file to HandBrake's watch folder?

    This seems to be working so far. I had to create separate custom scripts for Radarr and Sonarr.

    1. Movie or episode is downloaded and imported
    2. Custom script copies file to the /watch directory
    3. File is encoded
    4. After encoding, file is moved to the /output directory then copied to overwrite the original file

    An issue I had was trying to make sure the files were dropped into the correct directory after encoding. Sonarr adds an extra directory that Radarr doesn't have. Wanting to use the container's available functions without adding new packages, I had to come up with a way to identify that extra directory. 

     

    Here's the variable I created and finally settled on:

    # counts the '/' characters in the converted file's path, i.e. how many
    # directory levels deep it sits (Sonarr adds one level that Radarr doesn't)
    DEPTHCOUNT=$(find "$CONVERTED_FILE" -type f | grep -o / | wc -l)

    Seems pretty functional so far. I get to watch videos asap and they still get encoded. 
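
    In case it helps anyone, the Radarr-side script is basically just a copy into the watch folder. A minimal sketch of what such a script could look like (not my exact script), with a few assumptions: the imported file's path arrives in the radarr_moviefile_path environment variable, and the HandBrake watch folder is mounted at /watch in the Radarr container. The Sonarr version is the same idea with its own variable.

    #!/bin/bash
    # Hypothetical Radarr custom script: copy the freshly imported (already
    # renamed) movie into HandBrake's watch folder so it gets encoded.
    set -euo pipefail

    SRC="${radarr_moviefile_path:-}"   # path Radarr reports for the imported file (assumed variable name)
    [ -f "$SRC" ] || exit 0            # ignore test events / anything without a file
    cp "$SRC" /watch/                  # HandBrake container's watch folder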

  9. 2 hours ago, Djoss said:

    You could do this by using post-conversion hook (https://github.com/jlesage/docker-handbrake#hooks).  Use a different/independent output folder and use the hook to move the file to its final location, overwriting the original one.

    I didn't think of that right off the bat. The next problem I would have is that once Radarr imports the video file, it renames it. Any idea of the best path to overwrite the renamed file? I know I would have to retrieve the new name, but I can't think of how... I think I'd have to create a post-processing script in Radarr to have it write the new name to a file, then this processing script can read the file and overwrite it.
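
    To sketch that idea (untested, and the hook's exact arguments should be checked against the hooks README linked above): Radarr's post-import script writes the renamed path to a small hand-off file, and the post-conversion hook reads it back and overwrites that file with the encoded output. Here /config/target_path.txt is just a made-up hand-off location, and the hook is assumed to receive the converted file's path as its argument.

    #!/bin/sh
    # Sketch of /config/hooks/post_conversion.sh (untested)
    CONVERTED="$1"                            # assumed: converted file path; check the hooks README for the real argument order
    TARGET="$(cat /config/target_path.txt)"   # renamed path written out by a Radarr post-import script
    if [ -n "$TARGET" ] && [ -f "$CONVERTED" ]; then
        cp "$CONVERTED" "$TARGET"             # overwrite the original with the encoded file
    fi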

  10. 39 minutes ago, Djoss said:

    Having the output and watch folders point to the same location is not a supported scenario. You are better off setting the output folder to where your existing files are, then putting the new files in the watch folder.

    I've been using SpaceInvader's method of converting and importing, but I run into a couple of issues. If the conversion takes too long, Radarr or Sonarr stop looking for the file and I have to import it manually. Plus I have to wait a while to watch it unless I convert it later.

     

    I was hoping to find a solution where I could allow Radarr or Sonarr to import the movie, copy it to a temp directory, then overwrite the original. In my case, transcoding a movie takes longer than watching one.

     

    I will inquire on that post. Thank you. 

  11. Hello, a couple quick questions here:

    1. Is it possible to ignore already existing files in the /watch directory?
    2. Is it possible to grab a file, have it converted, then put it back in the same location as the source file? I received an "already exists" error when trying this. My goal would be to convert only new files and leave the existing ones alone.

    Thanks.

  12. Hi everyone,

     

    I just moved my unRAID setup into a Norco RPC-3216. I have an X10SL7-F MB flashed to IT mode. I am using 4 SFF-8087 to SATA cables. My cache drive is connected directly to a SATA port on the board. As this board only has 14 ports, I installed a PCIe SATA expander. No matter what I've done, I can't seem to get unRAID to find any drive aside from my cache. Any ideas or areas to start? Thanks in advance.

    I have this Docker working perfectly. Quick question, as I've tried everything per Gitea's docs: where would I create the 'custom' folder to be able to modify the templates? I've tried it in the /data directory, /data/gitea/... I can't get it to accept any modifications via any custom folder.

     

    Thanks.

  14. On 10/22/2018 at 7:12 AM, deusxanime said:

    Just switched over from regular CP to CPP/SB. I'm trying to re-set up my backup and change a couple of settings, and the GUI keeps crashing when trying to change the scan frequency.

     

    Go to Settings (gear icon) > Backup Sets > "Frequency and Versions" Change... button > "Backup changes every" dropdown

     

    As soon as I hit that dropdown to change it to something other than 15 minutes, the GUI crashes. The only thing left up is the top menu bar. From there I can do File > Exit and it will restart the GUI and ask me to log in, but as soon as I go back in and try to change that setting, it crashes the GUI again.

     

    3 hours ago, Djoss said:

    So maybe the best thing to do is start over with a clean appdata and do the adoption process. It should be quick and straightforward:

    • Stop the container.
    • Move the appdata folder: "mv /mnt/user/appdata/CrashPlanPRO /mnt/user/appdata/CrashPlanPRO.backup"
    • Start the container.

    I'm having the same issue. My log looks pretty much identical, with no errors. The only difference is that I started from a clean appdata folder; mine was not a migration. Everything else works as it should except for that single option dropdown.

  15. 2 hours ago, fanningert said:

    @CrunchyToast Don't change the port in app.ini. The right way is to change the port in the web front-end of the Docker container. There you can change the SSH and HTTP ports to anything you like.

    (screenshot attached)

    Thanks for your response. I did that, but it only wants to work on 3000 no matter what I do.

  16. Hello,

     

    Thanks for this Docker. I am having issues changing the port. I assigned it to grab an IP from my network (so I can assign a domain to it); however, I cannot get it to change ports. Everything works fine if I leave it on port 3000, but if I modify app.ini and restart, it doesn't work. If I change it back to 3000, it works. I've even modified this in the Docker template in unRAID. Any idea what could be going on, or what I am missing?
