thespooler

Posts posted by thespooler

  1. I just installed my first docker with this, and while trying to edit Stack -> Compose File, after hitting Save, I lost all the UI buttons at the bottom and the UI is now inserted into the edit-description area.  It seems rather permanent; leaving the Docker page and coming back does not fix it.

     

    UPDATE:  I was able to edit the description to be just "My Description", and that brought everything back.  I swear that field had HTML in it to begin with, as I remember thinking that was odd, and I guess I removed the textarea closing tag.  Raw HTML in a field is usually a security no-no, for this very reason of modifying beyond the intent.

     


    compose.png

  2. On 2/16/2023 at 4:24 PM, Alex.b said:

    Hello,

    I'm trying to use SQLite with my Wiki.js instance.

     

    I managed to get it to run by providing only a filename in the DB_FILEPATH variable.  Knowing the database wouldn't survive an update, I just wanted to take it for a spin.  The docker user is not 'nobody', so there are going to be permission issues with appdata.  The best solution for me was to switch to the linuxserver.io container, and everything worked flawlessly.
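
    For anyone else who wants to kick the tires the same way, this is roughly the shape of what worked; a minimal sketch, assuming the stock requarks/wiki:2 image and its documented DB_TYPE/DB_FILEPATH variables (the container name and port mapping here are illustrative):

      docker run -d --name wikijs \
        -e DB_TYPE=sqlite \
        -e DB_FILEPATH=wiki.sqlite \
        -p 3000:3000 \
        requarks/wiki:2

    Note that a bare filename like wiki.sqlite lands inside the container's filesystem, which is exactly why the database won't survive an update; mapping DB_FILEPATH into a host volume would be the fix, permissions allowing.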

  3. On 4/2/2021 at 11:45 AM, CmdrKeen said:

     

    I'm trying to install Big Sur (not Catalina); however, it is behaving the exact same way, where the VNC connection shows a black screen.  Hitting Enter shows the Apple logo, but nothing else happens and I can't get past this screen.

     

    My post is only one page back...  Give it a try.

  4. "Do not overlap with autoupdate"

     

    What does this line mean?  The plugin is called Auto Update, the tabs say Auto Update, so what does "autoupdate" refer to?  Is there some unraid setting I'm not aware of?  If this is suggesting not to configure the plugin and Docker auto-update frequencies at the same time, does that mean they both can't run daily, for example?  Or can they in fact both be set to Daily, just at different times, in which case wouldn't the message be more informative if displayed under the scheduled time?  Perhaps "Do not overlap with the plugin schedule" would be clearer?

  5. PROBLEMS:

    Black screen

    Stuck on Apple logo

    Can't reach recovery servers / no network

     

    I followed the video EXACTLY and hit all of these issues one after another.  I went back 10 pages and saw these issues showing up repeatedly.

     

    The solution to all three of these is to skip ahead in the video to the 14-minute mark and add your server name to the script.  Run the script a second time, then jump back to the 8:50 mark and install for the first time.  I don't know why this issue is plaguing so many of us first-time installers.

     

    Of course, you'll still end up with Catalina until that's fixed.  I was going to update the ID for Big Sur and start over, but decided that Xcode 12 still runs under Catalina, so I'll try that first.  I'm just curious how responsive it is compared to my older MacBook Pro.  But man, for a walkthrough video, this has now taken me 6 hours from start to where I am now, which is just installing the OS.  The walkthrough video needs a walkthrough video.

  6. This was a surprisingly easy Docker to set up!  And to think I avoided it for so long...

     

    In case anyone else is using Caddy v2 for their reverse proxy, or in case I suffer a huge failure one day:

     

    Modify /config/config.php as follows (for completeness, I know this is repeated over and over in this forum):

      'trusted_domains' =>
      array (
        0 => 'unraid_IP:unraid_PORT',
        1 => 'cloud.your_Domain.your_TLD',
      ),
      'trusted_proxies' => array('unraid_IP'),
      'forwarded_for_headers' => array('HTTP_X_FORWARDED_FOR'),
      'overwritehost' => 'cloud.your_Domain.your_TLD',
      'overwriteprotocol' => 'https',
      'overwrite.cli.url' => 'https://cloud.your_Domain.your_TLD',

    And in your Caddyfile:

    cloud.your_Domain.your_TLD {
      reverse_proxy https://unraid_IP:unraid_PORT {
        transport http {
          tls_insecure_skip_verify
        }
      }
      encode gzip
      tls [email protected] {
      }
      header {
        # don't advertise "Caddy" as the server
        -Server
        # the docker nginx server already contains all the necessary security headers - instant A+ at https://scan.nextcloud.com/
        Strict-Transport-Security "max-age=31536000; includeSubDomains;"
      }
    }
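
    If you want to confirm Nextcloud actually picked up the config.php changes, occ can read the live values back; a quick sketch, assuming a container named nextcloud with www-data as the web server user (both vary by image; the linuxserver.io one differs):

      docker exec -u www-data nextcloud php occ config:system:get trusted_domains
      docker exec -u www-data nextcloud php occ config:system:get overwritehost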

     

    1. I gave it a shot, but it doesn't seem to support .HEIC files out of the box anyway.  Would VLC help with that?  I'm aware of the patent issues, but this is the iPhone default, is it not?  I've got enough wasted space with JPEG/RAW pairs without enabling compatibility mode for Apple to create JPG/HEIC pairs as well.
    2. The breadcrumb hierarchy was driving me crazy.  If I have pictures from the last 20 years and I browse by "When", I have to scroll A LOT to hit 2001 way down at the bottom of the page.  I expect to click on the year and select from all the possible years immediately.  I shouldn't have to scroll to see my kid's birth pictures when I could jump right to his birth year.
    3. Although I selected Auto Org to see what this paid feature might mean for me, I had hoped my original directory structure would be keyword-tagged onto the files, but it wasn't.  The pictures in the 'cuba' directory could have been tagged with 'cuba' as a keyword.  They're pre-GPS, so they aren't tagged by location.  Obviously ignore "DCIM*"-type numbered directories, but directories with alpha characters are likely meaningful.
    4. I don't care about Camera or Lens as categories, so I would want settings to turn them off.
  7. On 6/4/2020 at 10:21 AM, dubbly said:

    It isn't possible to play with the Switch or Xbox One remotely on a custom server natively.  There is a beta Docker (I forget the name) to trick an Xbox into doing this.  Not sure if it will work for a Switch.

    If you're referring to Phantom, it specifically says Switch is not supported.  

     

    However, I've had some success getting Switch players into the server without the Switch users having to do anything.  The only clients I'm concerned about are iOS and Switch.  My son launches a world on his iOS device, his iPad.  His Switch friends try to join that world, but because the external IP of his iPad and the docker server are the same, they actually end up on the docker server instead.  So once all of his friends are in the docker server, he closes his game and joins them on the docker server, connecting by local IP.  For his iPad, his "Add Server" entry used the domain name, which I resolve over local DNS to a local IP but which externally resolves to the external IP.  I had hoped this would allow invites to be sent with a domain name successfully, but I'm not sure it even matters, as the invite is never received on the Switch anyway.

     

    It's crazy that Fortnite just works on every client, and Minecraft is so perplexing.  Just getting all the permissions right for online play was a nightmare.

     

    The one thing I have noticed, however, is that the server has some issues remaining green to clients.  After a restart it's definitely green, but as the uptime ticks onwards, the server ends up red and only a restart will get it green again.  I've read the whole thread multiple times, so I know I'll hear it works on PE, etc.  Just stating my experience...

  8. Is anyone using Minecraft for Switch and connecting to this?  We're playing on iOS, but we're trying to figure out how to get a young kid on Switch to join.  The ports are open per the guide a few posts back.  We can "Invite" him when we play on the server, but he's not seeing it, or any option to "Add Server" like we do.  Permissions are probably okay, as they can play together on the Featured Servers.

  9. I'm having a problem that has only appeared since I updated my PCs to Windows 10 1903.  I can no longer drag and drop files onto network shares (the ones with the green pipe coming out of them), or even paste onto them.  I have to open the network share first so there is a folder view; then I can drop or paste onto the white space to add files to the root of that network share.  This is super unproductive.  It's likely not unRAID's fault, but I also find it hard to see why MS would do this on purpose.  I'm hoping there is some tweak on the server or the clients to restore this ability.

     

    I can drop files onto the network share directly in the breadcrumb bar, but technically that represents the open folder, so I guess it's treated differently.

     

    Annotation 2019-07-03 150729.png

  10. I'm running the latest Jellyfin docker from here.  CPU is pegged at 100%, memory is instantly eaten up, and the Jellyfin GUI doesn't come up, though it tries.  Unraid is still responsive.  From what I can tell, Jellyfin is indexing, and it runs hundreds of /usr/bin/ffprobe -i commands until all my memory is gone, then it sits there.  After around 5 minutes, dotnet crashes with an out-of-memory error according to the Unraid log, and the whole process starts again.  It's hard to get a sense of what's happening, as I'm not much of a Linux guy and the delay in doing anything is so great.  I can stop the docker, but starting it back up just results in the same scenario.

     

    I'm seeing all the ffprobes when I run top or ps after I ssh into Unraid itself.  I don't quite understand why the ffprobes just sit there; they don't seem to be doing anything, not even erroring out.  If I manually look in unRAID's /usr/bin directory, I don't see ffprobe, but I would have assumed this was running from inside the docker container anyway.  I see someone else has complained about ffprobe missing, but in my scenario, how can a process be showing in top if it's truly missing?  I'm not sure why Jellyfin would launch more than a few ffprobes at a time anyway.
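
    For anyone wanting to see the same thing on their box, this is roughly how I was counting them from an SSH session on the unRAID host (container processes show up in the host's process list, so no container name is needed):

      # count the stuck ffprobe processes
      ps aux | grep '[f]fprobe' | wc -l
      # look at a few of them, including process state and elapsed time
      ps -eo pid,stat,etime,args | grep '[f]fprobe' | head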

     

    At this point, I'm just waiting things out to see what happens.  I'm at the letter T in the indexing after 22 hours, but thought I'd pop in here and post this.  

     

    Just tried to copy the top output to the clipboard and Kitty crashed...  It did manage to copy though.  There are more than 900 ffprobes listed in top.

     

  11. On 8/21/2013 at 8:59 AM, Fireball3 said:

    Update on 17.04.2017, v4 <--- this is the latest, use this one!

    Firmware is still P20.00.07.00

    Corrections for EFI environment. Untested due to missing hardware.

    Post your experience in the forum.

    https://www.mediafire.com/?py9c1w5u56xytw2

     

     

    If you're still interested in maintaining this, there are some corrections you need to make.

     

    For the thread:

    I also got the "DOS/16M Error: [40]  not enough available extended memory (XMIN)" error.  Pulling a stick did nothing, that still left me with 4GB in the PC.  I couldn't get around this no matter what I tried.  I ultimately had to replaced DOS4GW.EXE with DOS/32A a more recent DOS Extender which didn't have a problem with what I guess is TOO MUCH ram.   I just dumped everything from the extract binw directory into the root of my USB drive and renamed DOS32A.EXE to DOS4GW.EXE and the DOS parts worked.

     

    The EFI shell scripts have a few problems, mostly cosmetic:

     

    "@echo is off" should be "@echo -off"

    "cd..\xxxxxxxxx" should be "cd ..\xxxxxxxxx" <- you need the space after "cd" or it fails.

    "echo . sas2flash.efi -l blah blah" <- this confused "echo", it thinks the "-l" is meant for it, which is invalid.  Perhaps wrapping the echo with quotes would make that work.

     

    Since you were unsure of EFI, I was following along with another blog to verify the commands, and the other blog had a reboot between 5.2 (P7) and 5.3 (P20).  Ultimately I didn't reboot.

     

    Step 6 looked sketchy, since it wants you to edit it but doesn't say so until you run it.  Maybe REM out the sas2flash.efi line, so people have to add the SAS address and remove the REM.

     

    In any case, your part was easy; thanks for doing this.  It was maybe 10 minutes to flash between DOS and the EFI shell, but getting around the DOS4GW errors took hours.

     

    Out of curiosity: I've only hooked up my cache drive to the controller, since mine was eBayed, and I'll stay like that until the controller earns my confidence.  But has anyone ever encountered issues with moving drives between controllers?  Should I turn parity correction off?

  12. 10 hours ago, dopray said:

    While setting up a Docker for Appdaemon I'm getting the exact same error as @wyleekiot:

    dash_net = url.netloc.split(":")

    TypeError: a bytes-like object is required, not 'str'

     

    No dashboards or config set, just trying to connect to HA. Anyone have an idea what might fix this?

    Inside the configuration.yaml file, your "dash_url" is blank.  It makes you wonder what other trivial errors you'll see if it can't handle that happening.

     

    Even if you remove the dash_url, it will generate a different error.  Remove the entire "hadashboard" section so the dashboard is disabled, and the Docker should start up.

     

    #hadashboard:
    #  dash_url:
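
    Alternatively, if you actually want the dashboard, giving dash_url a real value should also get you past the error; a sketch, assuming AppDaemon's documented dash_url setting, with an illustrative host IP and port:

    hadashboard:
      dash_url: http://192.168.1.10:5050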
    

     

  13. 21 hours ago, RobJ said:

    The diagnostics you posted only showed the drop of the boot drive, a USB connection.  No other drive was dropped.  None of the other drives were assigned because there was no super.dat available, so they would have appeared as unassigned drives. 

     

     

    Ah, interesting!  Previously, when the flash dropped, all the other drives stayed assigned.  But I guess that's the difference between the flash dropping after the array is started versus the flash dropping before the array is started.

     

    I've rebuilt drive 2 and so far, so good.  I didn't realize the dropping of drives was an issue; I would have saved myself $500 in new hardware had I known.  I just assumed it was a definite hard drive failure.

     

    Thanks for your assistance.

  14. I've disconnected and reconnected the cables and moved the flash drive to a USB2 port.  I also plugged my spare precleared drive into a PCIe SATA card I wasn't using, just to see if it persists if the others drop again.

     

    unRAID has booted, and everything is assigned correctly without me doing anything, but the array is stopped and drive 2 is still red-X'd.  Since drives are dropping, I'm not sure how concerned I should be for the array.  How do I get disk 2 back?  Remove it, start the array, stop the array, and reassign the original disk 2 back to disk 2?  Will that cause a rebuild of that drive?  I'm a little nervous about what happens if this drive, or other drives, start dropping during that process.

  15. On 3/8/2017 at 4:17 AM, superderpbro said:

    Is there a way to see the status of a preclear started by the plugin via SSH?

     

     

    Right above your post is mine, with a similar problem, but there is a file created at "/tmp/preclear_stat_sd?" that you can look at.  Whether the file is changing, or its timestamp, might help you figure out whether it's still functioning.
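
    Something along these lines from an SSH session; a sketch, with sdb standing in for whatever device letter your drive was given:

      # list the status files the preclear script writes
      ls -l /tmp/preclear_stat_sd*
      # dump one, and re-run later to see whether the contents/timestamp move
      cat /tmp/preclear_stat_sdb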

  16. 23 minutes ago, RobJ said:

    Once you lost the initial access to the boot drive, and therefore lost /boot, there was no way to save any reconfiguration or drive assignments.  Drive assignments are in super.dat in the config folder of the boot drive.  /boot is tied at boot time to a USB device with a FAT file system containing a volume label of UNRAID, in your case /dev/sda.

     

    Before the array was started, the drive was dropped, and all assignments lost at that time.  It was quickly found again, assigned to /dev/sdh, but not seen any longer at /boot.


     

    Sorry for the confusion.  These are two unrelated events.  

     

    Pre-reboot, I remounted /boot from sdh1 in case I needed to modify the drives.  The mount worked and the errors went away, but I ultimately didn't do anything, since the new SATA device wasn't detected.
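
    For the record, the remount was nothing fancy; roughly the following, with sdh1 being where the flash had reappeared on my system:

      # re-attach the returned flash device (FAT volume) at /boot
      mount -t vfat /dev/sdh1 /boot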

     

    After the reboot, everything was detected and in its place.  All drives were assigned, and disk 2 was still red-X'ed, but now SMART was working.  Later, if you look at the system log, when the USB was dropped, so were all the drives; that's when everything showed up under Unassigned Devices.  In the original scenario, only the flash and disk 2 dropped.  This time, after adding a new drive to the system, they all did.

     

    I think the configuration is good, but I haven't rebooted yet.  Going to redo the cables first.   

  17. Corsair CX430M.  This is an i3 setup.  Nothing major.

     

    The server ran in its original state for many months, with, I think, 2 flash drives needing to be replaced during that time.  Adding this new drive today was the only other physical change.

  18. Thanks for your reply.  I've been through a lot since I originally posted this, in an effort to resolve it.  In any case, I have my replacement drive, and it's precleared.  I popped it in hoping it would be picked up through hot-plug support, but it wasn't.  To sum up where I was: the /boot drive had dropped, and disk 2 was being emulated and listed a few write errors, but the array itself was still serving files and dockers were running.  I ultimately mounted /boot on sdh1, which was where the flash drive had shown up after I checked it out on a Win10 PC.  I was hoping that would allow me to save the config and rebuild, but since the hot plug didn't work, it didn't matter.

     

    I restarted today and the /boot drive was back.  Previously, drive 2 couldn't show SMART info while it was being emulated.  I thought this was normal, but I see now that when unRAID came back up, the array was not started but SMART tests were available.  So I started to wonder whether disk 2 had dropped in the same way the /boot drive had; maybe there wasn't anything wrong with it at all.  I left it running a SMART extended test, and it seemed stuck at 10% for a long time.  I clicked Main, and was a bit shocked to find all the drives now sitting in Unassigned Devices.

     

    I'm just using the motherboard SATA connections.  

     

    I didn't previously post the diagnostics because there wasn't much in them once /boot dropped days earlier.  This one catches the drop at:

     

    Mar 10 15:55:41 Tera kernel: usb 4-4: USB disconnect, device number 2.

     

    I'll remove and reseat all the cables next. 

     

    tera-diagnostics-20170310-1602.zip

  19. 9 hours ago, thespooler said:

    I'm trying to do a preclear in a VirtualBox VM on Windows.  I have a trial unRAID, with no config or key, and unRAID has started.  I have installed the Preclear plugin.  The HD is passed through and recognized via VirtualBox's raw mode.  Preclear is running, but there is no SMART info available.  Am I just wasting my time with this?  Or will Preclear still be able to tell me if the drive is okay through actual drive errors it might encounter?  I can always check the SMART stats once it's hooked up natively or through Windows.  The drive is a NewEgg HGST refurb (supposedly EOL'd from a data center) and it passed all the SMART tests with HGST's WinDFT.


    Hmm, well, it doesn't appear to have worked.  Preclear is hung at 98% of the first of three cycles' pre-read stage, using the built-in preclear script.  How should I recover from that?  The "/tmp/preclear_stat_sd?" file has a timestamp of 3 hours ago (the preclear itself reports 6 hours have passed, but I started with a reboot and my system uptime is 10 hours).  Top shows preclear with 99% usage, though its CPU time is only 187:33.  Is that the missing 3 hours?

     

    Perhaps the pre-read did complete, and the next phase is where it hung.

     

    Pre-Read (1 of 3): 98% @ 126 MB/s (6:30:25)

  20. I'm trying to do a preclear in a VirtualBox VM on Windows.  I have a trial unRAID, with no config or key, and unRAID has started.  I have installed the Preclear plugin.  The HD is passed through and recognized via VirtualBox's raw mode.  Preclear is running, but there is no SMART info available.  Am I just wasting my time with this?  Or will Preclear still be able to tell me if the drive is okay through actual drive errors it might encounter?  I can always check the SMART stats once it's hooked up natively or through Windows.  The drive is a NewEgg HGST refurb (supposedly EOL'd from a data center) and it passed all the SMART tests with HGST's WinDFT.

  21. So I rebooted 3 days ago to finish installing 6.3.1.  I did some Docker updates before the reboot.

     

    Today, I logged into the web GUI to update a Docker, and it told me there was a read error on a drive.  Sure enough, I see one of my hard drives has an X on it, and 3 read errors.  Okay.  I acknowledged the notifications, but the GUI is clearly struggling.  Every page wants me to install the Preclear crap, which is annoying.  The Plugins page isn't even showing Preclear so I can uninstall the thing; it only shows unRAID and the web GUI installed.  The Main tab isn't doing anything.  There are no Dockers, or even a Docker tab.  Checking the syslog makes the browser unresponsive, but I can see this in gray:

     

    Feb 14 04:40:01 Tera liblogging-stdlog:  [origin software="rsyslogd" swVersion="8.23.0" x-pid="1426" x-info="http://www.rsyslog.com"] rsyslogd was HUPed

    Feb 14 04:40:01 Tera root:

    Feb 14 04:40:01 Tera root: Warning: file_put_contents(/boot/config/docker.cfg): failed to open stream: No such file or directory in /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/dockerconfig.php on line 35

    Feb 14 04:40:01 Tera kernel: fat__get_entry: 182 callbacks suppressed

    Feb 14 04:40:01 Tera kernel: FAT-fs (sda1): Directory bread(block 8192) failed

    Feb 14 04:40:01 Tera kernel: FAT-fs (sda1): Directory bread(block 8193) failed

    Feb 14 04:40:01 Tera kernel: FAT-fs (sda1): Directory bread(block 8194) failed

    Feb 14 04:40:01 Tera kernel: FAT-fs (sda1): Directory bread(block 8195) failed

    Feb 14 04:40:01 Tera kernel: FAT-fs (sda1): Directory bread(block 8196) failed

    Feb 14 04:40:01 Tera kernel: FAT-fs (sda1): Directory bread(block 8197) failed

    Feb 14 04:40:01 Tera kernel: FAT-fs (sda1): Directory bread(block 8198) failed

    Feb 14 04:40:01 Tera kernel: FAT-fs (sda1): Directory bread(block 8199) failed

    Feb 14 04:40:01 Tera kernel: FAT-fs (sda1): Directory bread(block 8200) failed

    Feb 14 04:40:01 Tera kernel: FAT-fs (sda1): Directory bread(block 8201) failed

    Feb 14 04:40:02 Tera root:

    Feb 14 04:40:02 Tera root: Warning: file_put_contents(/boot/config/domain.cfg): failed to open stream: No such file or directory in /usr/local/emhttp/plugins/dynamix.vm.manager/scripts/libvirtconfig.php on line 39

     

    I'm assuming sda1 is my USB flash drive, as the /boot directory is empty in MC.  How did the system even boot?!  It must have died after it booted?!  If so, this would be the 3rd USB flash drive I've burned through since I started using unRAID last year.  These issues are always discovered after an upgrade, probably because the box gets rebooted so infrequently.  I plugged the USB flash drive into my Windows machine and it lights up and seems fine.  I was able to copy all the files, and Windows sees nothing wrong with the FAT; Windows can open /config/docker.cfg no problem.  But plugging it back into my unRAID box, it doesn't light up in any USB port.  I'm reluctant to reboot, as my files are being served and my dockers actually are running right now.

     

    One thing I did think was strange with 6.3.1 was that after it booted, I could see my USB flash drive listed under Boot Device as sda (confirming my suspicion that it's my boot drive that is no longer readable), but also listed under Unassigned Devices as sdg.

     

    Boot Device

    Device: Flash
    Identification: USB_Flash_Drive - 16.0 GB (sda)
    Reads: 443,036  Writes: 411,842  Errors: 0
    FS: vfat  Size: 16.0 GB  Used: 304 MB  Free: 15.7 GB

    Unassigned Devices

    Device: sdg
    Identification: USB_Flash_Drive_AA437O23EKVFSJJYFSVA
    FS: vfat  Size: 16.0 GB

     

    The Main screen seems to cache the Boot Device; even if I unplug it, it's always there.  System Devices shows it missing, and when I plug it back in, it appears as sdg.  I tried to mount it with Unassigned Devices to see if the files were there now, but it just refreshes and shows the Mount option again, never getting past it.