Posts posted by bobobeastie

  1. On 11/29/2020 at 9:21 AM, Rollingsound514 said:

    Ok, so despite PIA VPN client being set to allow LAN, it wasn't allowing connections to be made to network drives, go figure, so I turned it off, made the connection and then turned it back on again and the connection to the network drive seems to be maintained...

    Thank you so much, my pause updates for 7 days just ran out and my computer updated this morning, had a bunch of red X's on my shares, one wasn't red and it was trying to connect for a minute, saw your post, disabled PIA, and it connected immediately, along with the ones that were red.  I think it might have been a long time before I thought of disabling PIA.

  2. Just updated my PC to 20H2 and now I can't access my shares.  One unraid box is on a P2P 10gbe link, and I thought that configuration was the issue, but my other unraid box, which is connected normally through my router, is also not connecting.  I don't see the registry setting found here: "Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" so maybe Windows isn't using that anymore.

     

    EDIT: Bit of a false alarm, 1 restart fixed the normal 1gbe router shares, and a couple more restarts and reconfiguring the 10gbe settings in Windows fixed that.

     

    EDIT 2: Been dealing with the issue all morning, issue sometimes only happened with 10gbe share, I uninstalled 20H2, that didn't work, then I uninstalled KB4586781 which had been installed at the same time as 20H2, and now everything works!

  3. It's back up and running.  The first xfs_repair run resulted in this message:

    Quote

    Phase 1 - find and verify superblock...
    bad primary superblock - bad CRC in superblock !!!

    attempting to find secondary superblock...
    .found candidate secondary superblock...
    verified secondary superblock...
    writing modified primary superblock
    Phase 2 - using internal log
            - zero log...
    ERROR: The filesystem has valuable metadata changes in a log which needs to
    be replayed.  Mount the filesystem to replay the log, and unmount it before
    re-running xfs_repair.  If you are unable to mount the filesystem, then use
    the -L option to destroy the log and attempt a repair.
    Note that destroying the log may cause corruption -- please attempt a mount
    of the filesystem before doing this.

    So I stopped the array and started it in normal (non-maintenance) mode; the disk still showed unmountable.  I stopped the array again, started it in maintenance mode, and ran xfs_repair; after that I could start the array and the issue went away.  The resulting trash folder is basically empty, and all I had were torrents on the disk, so it looks like I was lucky.  Thank you @itimpi and @Kevek79 .
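For anyone landing here with the same error, the sequence above can be sketched as a short script.  This is a sketch only: /dev/md3 is a placeholder for the affected disk's device with the array started in maintenance mode, and the -L step should stay commented out until mounting the filesystem to replay the log has actually failed.

```shell
#!/bin/sh
# Sketch of the xfs_repair sequence described above.
# /dev/md3 is a placeholder; substitute the md device of the unmountable disk.
DEV="${1:-/dev/md3}"
if [ -b "$DEV" ]; then
    xfs_repair -n "$DEV"    # 1. dry run: report problems, change nothing
    xfs_repair "$DEV"       # 2. real repair; if it asks for a log replay,
                            #    try mounting/unmounting the fs first
    # xfs_repair -L "$DEV"  # 3. last resort only: zeroes the log, may lose
                            #    recent metadata (recovered files land in
                            #    lost+found)
    STATUS="repaired"
else
    STATUS="skipped: $DEV is not a block device"
fi
echo "$STATUS"
```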

  4. 2 hours ago, Kevek79 said:

    And as you have exchanged the failed drive, do not format or discard the old drive before your array is up and running again.

    Just in case anything goes south from here, you still have a copy of the data on that old disk! With a file system corruption of some kind, but still a viable source for data recovery just in case.

    Thanks, that's my plan.  After I get things working and I don't need the backup, I'm going to preclear the old drive 3 times and, if it passes, use it to replace the smallest drive in my array.  I really wish I had money and that the price of NORCO RPC-4224's hadn't nearly doubled, or I'd buy a 2nd one.  The first one I have has been relatively rock solid.

  5. 5 minutes ago, itimpi said:

    That would work.   Whatever is showing before you start a rebuild is what will show after the rebuild.

     

    if you would prefer then it is perfectly acceptable to run the file system check/repair on the emulated drive before doing a rebuild.   If that does not fix the ‘unmountable’ file system then the rebuild will not help as the rebuild simply transfers what is on the emulated drive to the physical drive.

    Thanks, I don't think there was any mention of an issue with the fs before shutting down.  I have array status and alerts email notifications enabled and don't have an email about the fs; not sure if I would expect to have seen one.  If there is a chance the rebuild will fix everything, or that fixing the fs after the rebuild will solve everything (i.e. no 2nd rebuild is needed), then I think letting it run makes sense.  If I would need to do a 2nd rebuild then I think stopping, fixing the fs, then doing a rebuild makes more sense.  Also, I guess my preference is safer over faster.

  6. I had a drive disabled due to read errors, which I understand is usually a cable issue.  I stopped the array, shutdown, changed out the cable, which is CableCreation [2 Pack] Mini SAS 36Pin (SFF-8087) Male to 4 SATA 7Pin Female Cable, Mini SAS Host/Controller to 4 SATA Target/Backplane, 1M / 3.3FT, replaced it with another of the same, and replaced the drive, because I had a drive precleared and ready to go.

     

    On booting up, I selected the "new" drive in slot 3, and started the array.  Then I noticed that the new drive has "Unmountable: No file system" listed.  In the past, I have clicked on format, and lost data, which is why I am writing this, so I don't make the wrong guess and screw things up.  I think the data rebuild and unmountable file system are kind of 2 separate issues (probably same root cause), and the rebuild can continue, as in it is actually writing data to the drive, but that the fs on that drive will not work once done, because it doesn't work with the emulated contents currently.  I think I want to let it finish, then stop the array, start in maintenance mode, and check the fs?

    flags-diagnostics-20200923-0809.zip

  7. 24 minutes ago, JorgeB said:

    You have a flash drive problem, filesystem is read-only, so Unraid can't update super.dat with the new disk info, run chkdsk on the flash drive.

    Windows 10 basically didn't recognize the drive, was going to try to get a new flash backup, but now the box boots to the BIOS screen, so I am going to try with a flash backup from early yesterday.  Good thing the recycle bin plugin made me suspect the flash drive.  I'm now following the directions here: https://wiki.unraid.net/UnRAID_6/Changing_The_Flash_Device didn't realize it could take a local zip file of the flash backup, that's pretty cool.  I expect/hope that when I boot with it, recycle bin settings are visible and once the drive rebuild is done again, it will be done for good.

     

    edit: So far so good, recycle bin is visible and the rebuild started.  I should probably order a couple of whatever a search finds to be the most dependable flash drives for unraid.  Thank you @JorgeB

  8. I have rebuilt a replacement drive twice, and it is asking me to do it for a third time.  The procedure I used was to stop my array, select no device for the drive I was going to upgrade, power off, swap drives, and boot.  I had a difficult time getting connected to the network, kept getting a 169.254.x.x self-assigned IP.  Eventually I got that sorted out with the Ethernet cable, it booted on the network, I chose the new replacement drive, and the rebuild started.  At this point I noticed that the recycle bin plugin is not visible in the settings page, and that I can install it, but it never shows the gear icon, only the install button, in community applications. 

     

    The rebuild finishes, and I had another replacement drive, so I stop the array, and change a drive to no device, I think it was at this point I noticed that the first replacement drive was listed as no device, so that made 2 no device slots, which is no good with single parity.  I quickly changed both assignments to the correct drives and a rebuild started.  I made diagnostics during that rebuild, after it finished, and when I just rebooted a couple of minutes ago, and the same issue is present. 

     

    I noticed in the history under array operations, that the rebuilds are not being listed after they are done.  The last thing listed was my beginning of month parity check.  Could this be a bad USB flash drive issue?  I have heard you can check the drive for errors in windows, and that sometimes fixes issues.  I have the drive plugged in to an internal USB2 header.

     

    edit: Just noticed an email from 20 minutes ago:

    Event: USB flash drive failure
    Subject: Alert [FLAGS] - USB drive is not read-write
    Description: Cruzer_Glide_3.0 (sda)
    Importance: alert

     

    going to try "Put flash in your PC and let it checkdisk." from @trurl

    flags-diagnostics-20200908-0722.zip flags-diagnostics-20200908-0715.zip flags-diagnostics-20200907-1408.zip
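    As an aside for anyone hitting the same "USB drive is not read-write" alert: before pulling the flash for chkdsk, you can confirm the read-only condition from the Unraid console.  A minimal sketch, assuming the flash is mounted at /boot as usual:

```shell
#!/bin/sh
# Sketch: check whether the Unraid flash drive (mounted at /boot) is
# writable; a failure here matches the "USB drive is not read-write" alert.
flash_writable() {
    # try to create and remove a scratch file on the given mount
    touch "$1/.rwtest" 2>/dev/null && rm -f "$1/.rwtest"
}
if flash_writable /boot; then
    echo "flash is read-write"
else
    echo "flash is read-only or not mounted"
fi
```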

  9. 1 hour ago, ElectricBrainUK said:

    Yeah, you're right, the second request for username and password is probably redundant; I will remove it in a future update.  But root and your root password is correct, yes. 

     

    However it is unusual that the defaults are back in the unraid api docker config after editing them; they should persist as the new username and password, and I'm guessing that is why it is getting unauthenticated on your MQTT broker.  Sounds like a problem with the unraid server itself; do other changes to any of your docker file configs persist?  If not it might be worth restarting or updating your server. 

    I did a bunch of stuff, including uninstalling unraid-api and MQTT, and had a response here half written for half of today; after doing some other home assistant stuff I got back to this.  I don't know if it was my issue before, but where you have Container Variable: MQTTBroker and the default is hassio (I think), I just figured out that that was supposed to be the IP address, in this case of my unraid box.  If you are open to suggestions from people who don't know anything about MQTT, I would put "broker_hostname-or-ip", or something to that effect, instead of broker, as the default.  I forgot I had changed that to the IP, went in to home assistant a little later, and MQTT had a bunch of entities.  


    Thank you very much for your help!  I see that there is an entity for switch.server_name_power_off; I'm hoping/guessing that this is a graceful shutdown?  I also want to have buttons to pause docker containers.  I see that I can turn them on or off, but pause was in the webui for unraid-api, so I probably just have to change something to enable pausing?  Maybe I'm better off stopping and starting, but still want to learn.  Thank you.

  10. 2 hours ago, ElectricBrainUK said:

    Ah right, the username and password should be the ones you set up when you configured the mqtt broker and you'll also need to add those to the mqtt config you have in your home assistant here:

    Unless of course your broker doesn't have a username and password in which case you should leave them blank in your unraid API config as well. 

     

    It sounds like your server is properly set up in the unraid API if you can see and control it from there. However you should only connect the unraid server itself there, not to the home assistant docker which is why you're seeing the following in the logs:

    You can use the manual config section on the left to delete that entry and ensure the unraid server UI is the only listing. 

     

    Otherwise it all sounds good, so hopefully we will get to the bottom of your issue soon! 

    Thank you, I tried deleting the default username and password from unraid api, because I wasn't using any in the MQTT container, I didn't know that's how it worked.  After clearing out the username and password, if I go back to edit the container the defaults are back.  So I tried setting up a real username and password, which I also placed in a passwords.txt file in the MQTT appdata folder.  That file disappears when I restart the container so I believe it worked. 

     

    I deleted 2 prior configs in unraid api using manual config on the left, then re-set it up, and this brings me back to something that confused me.  There's a box called "Setup Unraid Server"; I fill it out with the IP of unraid, login=root, and my root password, but then there's a popup asking for username and password.  That seems redundant; am I supposed to put something else in one of those places?  I have been using root and my root password.

     

    I updated configuration.yaml to have this for MQTT:

    mqtt:
      broker: 192.168.1.10
      port: 1883
      discovery: true
      discovery_prefix: homeassistant
      username: username
      password: password

     

    Username and password have been changed by me, they are the ones used in unraid api and mqtt... and when I restart home assistant I get this in the log:

     

    2020-08-02 08:35:00 ERROR (Thread-4) [homeassistant.components.mqtt] Unable to connect to the MQTT broker: Connection Refused: not authorised.

    2020-08-02 08:35:00 WARNING (Thread-4) [homeassistant.components.mqtt] Disconnected from MQTT server 192.168.1.10:1883 (5)

     

    I used the updated username and password in MQTT Explorer and I believe it is working, as I see that there are messages; I can't figure out what is in them, though.
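    One way to reproduce that "not authorised" error outside of home assistant is to test the credentials directly with the mosquitto command-line clients.  A sketch only: the broker IP, username, and password below are placeholders matching the configuration.yaml above, and it assumes the mosquitto clients are installed.

```shell
#!/bin/sh
# Sketch: probe the MQTT broker with the same credentials home assistant
# uses. A "Connection Refused: not authorised" here reproduces the HA error.
BROKER=192.168.1.10; MQTT_USER=username; MQTT_PASS=password   # placeholders
if command -v mosquitto_sub >/dev/null 2>&1; then
    # wait up to 5 s for a single message on the discovery tree
    mosquitto_sub -h "$BROKER" -p 1883 -u "$MQTT_USER" -P "$MQTT_PASS" \
        -t 'homeassistant/#' -C 1 -W 5
    RC=$?   # 0 means a message arrived; nonzero means timeout/refused
else
    RC=127  # mosquitto clients not installed on this machine
fi
echo "mosquitto_sub exit code: $RC"
```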

  11. I left your setting as default, so base topic is homeassistant.  I then set up MQTT with Broker equal to my unraid server IP address, where all this lives, the default port, and your default username and password.  I do see good information in unraid API.  I see a bunch of messages like this in its log:

     

     

    Get Main Details for ip: http://192.168.1.10:8123/ Failed
    Request failed with status code 500

     

    I see sent and received messages in the log for mqtt.  In home assistant I can listen to homeassistant in the mqtt config, and then publish on homeassistant and it logs that.  Not sure if either of those means anything.  Thank you very much for your response, I appreciate it.

  12. I'm pretty confused, just playing with home assistant today for the first time, using the container homeassistant/home-assistant.  Trying to get this to work, I installed electricbrainuk/unraidapi, installed spants/mqtt and I think both are working?  I installed MQTT-Explorer in windows and it saw messages.  Every time I go to the web gui it makes me fill out username and password in a popup, and the username/password on the left isn't persistent.  

     

    The instructions mention this:

    Add the following custom repository: https://github.com/ElectricBrainUK/HA-Addons

    Build the Addon

    Fill in the config section

    Start

     

    I don't understand what this means.  I can see docker and unraid server details in the unraid-api after I fill out the info in the popup, but I am not getting any discovered devices in MQTT in home assistant.  I have the following in configuration.yaml:

     

    mqtt:
      broker: 192.168.1.10
      discovery: true
      discovery_prefix: homeassistant
     

  13. 20 hours ago, trurl said:

    How is it supposed to "look" at the "current file size"? What does that even mean when the file doesn't exist on Unraid yet?

    That's a good point, I guess I was incorrectly thinking of files moving into unraid, but these are moving within unraid internally, or only semi internally, because they are mostly files created in docker containers.  If rtorrent says it wants to create a file, I guess the size on disk starts at 0kb, but I think the other size is the eventual whole; does unraid know those values?  I thought if it was bigger than available it would choose to write to the array vs cache.  If there is a wiki that explains all of this I'd be happy to study it.  Thank you.

  14. Did some googling and got it fixed.  In case anyone else has the same issue: my database was corrupt, probably because it wasn't able to get written to.  First I stopped plex and moved the file labeled 1 in the below image to a backup folder that I created.  Then plex was able to start, but it was a fresh install; I thought maybe it would grab a backup, no luck, but at least plex would start.  Then I stopped plex again, deleted the tiny version of file #1 that had just been created, copied file number 2 (because it had the most recent date) to the backup folder, then renamed the copy to not have the date.  That worked, and now plex has only forgotten the last 3 days.  Too bad plex will only create a copy every 3 days.  I was waiting on winrar opening CA_Backup.tar, and it looks like I have a newer version in there, so that's even better.  Thanks for the help @trurl !

     

    image.thumb.png.a7617431c7359f52848d8a2e44da725d.png
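    For the record, the same health check can be done from the command line with sqlite3 before moving files around.  A sketch; the database path assumed here is the usual linuxserver-style Plex appdata layout and may differ on your setup.

```shell
#!/bin/sh
# Sketch: check the Plex library database for corruption with sqlite3.
# The path is an assumption based on a typical Plex appdata layout.
DB="/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db"
if [ -f "$DB" ] && command -v sqlite3 >/dev/null 2>&1; then
    # prints "ok" for a healthy database, error details otherwise
    sqlite3 "$DB" 'PRAGMA integrity_check;'
    STATUS="checked"
else
    STATUS="skipped: database or sqlite3 not found"
fi
echo "$STATUS"
```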

  15. 3 hours ago, trurl said:

    You have a lot of shares, all with no Minimum Free, and a lot of mostly full disks. This is likely to cause issues with overfilling disks.

     

    The general recommendation is to set Minimum Free for a share to larger than the largest file you expect to write to the share. Unraid has no way to know how large a file will become when it chooses a disk to write. If a disk has less than Minimum, it will choose another. Cache also has a Minimum setting in Global Share Settings.

     

    Other than that, it isn't clear what the specific issue is. You might try deleting and recreating docker image in Settings - Docker. Then the Previous Apps feature on the Apps page will reinstall your dockers just as they were.

    I was aware of the fact that unraid would try to write files that were too large for the destination but never looked for a setting to prevent it or did any research.  I guess I don't understand why unraid doesn't just look at the current file size and determine that it won't fit, and then write it to the array?  I have put some values in for this on my shares that write to cache, and hopefully it will help prevent this issue in the future.

     

    First I tried removing the plex docker, no dice; then I stopped docker in settings and removed the image, added plex back alone, and still no dice, same error message.  I haven't seen any issues before or after with any of my other containers; they all added back fine.  I'm attaching a fresh diagnostics from after re-setting up docker, just in case.

    nastheripper-diagnostics-20200523-1726.zip

  16. Woke up to plex not working and the log is full of "Starting Plex Media Server".  I have tried restarting the docker and rebooted unraid, neither worked.  My cache drive became 100% full at some point last night I think, this might be related.  Usually when that causes docker issues a reboot solves it.

     

    User uid: 99
    User gid: 100
    -------------------------------------

    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 40-chown-files: executing...
    [cont-init.d] 40-chown-files: exited 0.
    [cont-init.d] 45-plex-claim: executing...
    [cont-init.d] 45-plex-claim: exited 0.
    [cont-init.d] 50-gid-video: executing...
    [cont-init.d] 50-gid-video: exited 0.
    [cont-init.d] 60-plex-update: executing...
    No update required
    [cont-init.d] 60-plex-update: exited 0.
    [cont-init.d] 99-custom-scripts: executing...
    [custom-init] no custom files found exiting...
    [cont-init.d] 99-custom-scripts: exited 0.
    [cont-init.d] done.
    [services.d] starting services
    Starting Plex Media Server.
    [services.d] done.
    No update required
    [cont-init.d] 60-plex-update: exited 0.
    [cont-init.d] 99-custom-scripts: executing...
    [custom-init] no custom files found exiting...
    [cont-init.d] 99-custom-scripts: exited 0.
    [cont-init.d] done.
    [services.d] starting services
    Starting Plex Media Server.
    [services.d] done.
    Starting Plex Media Server.

     

    After that the last message just repeats.

  17. 3 minutes ago, mdrodge said:

    so I need to add the preset like this: "Custom/General/PresetName"?

    My preset is named Custom1 and in the docker config in unraid I have "General/Custom1", so it looks like you can just remove the first part, "Custom/".  When I said I wasn't getting the same error that was referring to the message from @08deanr which @Djoss said didn't matter.  The error I got was just that the conversion failed, nothing about presets.

  18. I was having an issue where the update caused my watch folder to stop being processed, I was getting different errors, but changing the preset I use to have "General/" before it fixed it.  

     

    Question, all of the files that failed to convert because of this issue will now not process, I know changing their names will make them appear to be new, is there a better solution where I don't have to rename them?  Some file I can delete that logs failures?

     

    edit: Looks like failed_conversions in the base of the appdata handbrake folder does it (allows files to be re-tried), still waiting for handbrake to go through all of the files that were already processed, because I deleted my appdata earlier today and therefore it doesn't have a list of those in successful_conversions.
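    The edit above can be scripted.  A minimal sketch, assuming the default appdata path for this container (adjust to yours):

```shell
#!/bin/sh
# Sketch: truncate HandBrake's failure log so previously failed files are
# retried on the next watch-folder scan. Path is an assumed default.
reset_failures() {
    log="$1/failed_conversions"
    if [ -f "$log" ]; then
        : > "$log"                      # empty the list of failed files
        echo "cleared $log"
    else
        echo "no failure log at $log"
    fi
}
reset_failures "/mnt/user/appdata/HandBrake"
```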

  19. 17 minutes ago, kpc180 said:

    So, I'm not sure if I have done something (I don't recall changing anything)  but I'm not seeing the Strict Port Forwarding Option in my docker settings?  So I'm only able to open the webgui if I disable the vpn which is not what I really want to do.  What am I missing?

    Don't know, do a CTRL-F on "Container Variable: STRICT_PORT_FORWARD". If it isn't there I have no idea.

  20. @guru69 I have the same chassis and cooler and replaced the stock fans with some kind of Noctuas, which you did as well.  I don't see that you mentioned the motherboard temp; is it similar to mine?  If not, what do your settings look like?  I'm assuming a spot on the motherboard actually being 95.8C is pretty unlikely, or if it was headed in that direction it would shut down before getting there.  I suppose rebooting and checking in the bios would help, but I'm in the middle of a parity check, which I'm doing after a parity rebuild, because my 2 parity drives decided to become disabled.

     

    I'm not so worried about the CPU because of the 27C Tctl offset, and it looks like Tdie or CPU temp are without the offset, and they seem fine.  My CPU temp is reading 68.7, and I am running a handbrake docker that's running full tilt, not pinned to any core.  My best guess is that the motherboard temp of 95.8 is also getting an offset for some reason, because that brings it back down to the same range as the CPU.

  21. Temperature question for anyone with this motherboard, my motherboard is reporting at 95.8C, which seems unlikely to be accurate.  I know the CPU temp has a 27C offset but haven't seen anything about the motherboard.  I ran sensors-detect and added this to my go file:

     

    # modprobe for each sensor
    modprobe k10temp
    modprobe it87

     

    Is there a fix for this?  Is 95.8 accurate?  Not accurate but I have to live with it?  In dynamix system temp, "it87 k10temp" is in Available Drivers, and the mb temp is labeled "k10temp - MB Temp".  k10temp CPU die and Tdie are in the high 60's; I think it's clear that these are CPU sensors.  Tctl is in the mid 90's, which is just the annoying 27C offset.  So there aren't any obvious alternatives. 
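    For anyone comparing readings, the go-file modules above can be loaded and the raw values dumped in one go.  A sketch; it assumes lm-sensors is installed and just prints the temperature lines so Tctl, Tdie, and the suspect MB Temp can be compared side by side:

```shell
#!/bin/sh
# Sketch: load the sensor modules from the go file above, then print the
# raw temperature readings so the mislabeled 95.8C line can be inspected.
if command -v sensors >/dev/null 2>&1; then
    modprobe k10temp 2>/dev/null || true
    modprobe it87 2>/dev/null || true
    # Tctl typically reads ~27C above Tdie on these Ryzen/Threadripper boards
    sensors 2>/dev/null | grep -Ei 'tctl|tdie|temp' || true
    STATUS="read sensors"
else
    STATUS="lm-sensors not available"
fi
echo "$STATUS"
```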
