RadOD


Posts posted by RadOD

  1. The server BIOS still knows to boot to the USB drive.

     

    Before starting, I tried backing up everything. The bzfirmware file kept giving read errors. Multiple runs of chkdsk, however, found no errors, but I suspect the USB drive is going bad.

     

    I copied all the bz* files over. The 6.12 bz files would not boot, but the 6.11 files worked and I got things started up. Before attempting to back up the flash drive from within the GUI, I updated the OS to (I think) 6.12.6, thinking that was a good idea. When I later came back to do the backup, there was an upgrade to 6.12.8 or something. After running that, the USB drive was no longer detected, even after multiple reboots with the drive in every port.

     

    Is this what I want to do next: https://docs.unraid.net/unraid-os/manual/changing-the-flash-device/ ?

  2. My Unraid server has been in storage for over a year. Now I get "This is not a bootable disk." when booting from the original USB drive. The server boots fine from another USB drive on which I just installed a new, clean copy of Unraid.

     

    How do I repair the original USB drive? Or can I copy my key and config to the new USB? The new USB has the current version of Unraid. I don't know what version I was running, but it is obviously not current; will the version mismatch cause a problem?

  3. I have not used my Unraid server for over a year due to illness, and it has endured a couple of moves. I am trying to get it up and running again, but I cannot make a network connection through either the motherboard NIC or the PCIe card NIC (Intel X540). I deleted network.cfg and rebooted, but nothing happens: no /config/network.cfg file is autogenerated.

     

    Is there a way to get it to create the file?  Is there a log file somewhere that will tell me if there is some further system corruption causing the problem? Is there a way to repair or create a new USB drive without loss of data?
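    In case it helps while comparing against a working system: the file is just a set of shell-style variables. A minimal DHCP version of /boot/config/network.cfg looks roughly like the sketch below. This is reconstructed from memory, not copied from a live system, so treat the key names as assumptions and diff against a fresh Unraid install before trusting it (br0 bridging eth0 is also an assumption):

```text
# /boot/config/network.cfg -- hypothetical minimal DHCP config
USE_DHCP="yes"
IFNAME[0]="br0"
BRNAME[0]="br0"
BRNICS[0]="eth0"
BRSTP[0]="no"
BRFD[0]="0"
```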

  4. Rarely, something causes the Unraid GUI to freeze up, and the only way to fix it (clean shutdown and reboot) is from the console. That is kinda important when I need it, but rare enough that I don't want to dedicate a monitor and keyboard to it. Conveniently, I have a workstation close by, so I set up a cheap HDMI switch. The console displays correctly for a while but eventually goes black, and the monitor shows a "1920x1080 resolution is recommended" notification. It sounds like the HDMI switch is mangling the EDID somehow. Is there a boot-time config setting or a script I can run to manually override the resolution? Is there an xrandr command in Unraid somewhere? I don't see anything I recognize in the Nerd Tools or DevPack plugins.
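    There is no xrandr on stock Unraid (the console is not running X), but the kernel itself can be told to ignore a mangled EDID at boot via the standard `video=` parameter appended in /boot/syslinux/syslinux.cfg. A sketch of the change; the connector name HDMI-A-1 is an assumption (check yours with `ls /sys/class/drm`), and the trailing `e` forces the output on even without a valid EDID:

```text
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot video=HDMI-A-1:1920x1080@60e
```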

  5. I plan to replace one at a time, rebuilding the first before replacing the second. I just want to be certain this intermediate step is not going to result in something like the first new, larger disk getting a smaller partition to match the older drive, so that after I add the second larger parity drive I end up with two larger drives not being fully used. Or does Unraid handle all of this automatically?

  6. After a kernel panic and crash, which I suspect was somehow related to assigning custom IP addresses on br0 similar to what is described here...

     

    Many dockers have nothing under the "port mappings" column even though they used to. I cannot access any docker at all via http://IP:port, even those that still list their IP:port mappings. I am left with two 'versions' of the bridge:

    br-9c2cde536e88 and br-03ece3ee359d, which I didn't create and cannot delete.

     

     

    At this point, I am in over my head and totally confused.

     

    Is there an easy way to wipe all the Docker network settings and just start over from scratch?
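    Those br-<hash> names are Docker's auto-generated custom bridge networks (typically left behind by docker-compose). Containers must be stopped or disconnected before `docker network rm` will succeed, which may be why they seem undeletable. A sketch of a cleanup pass; the name-matching helper is mine, while the docker commands in the comments are standard CLI:

```shell
# Match Docker's auto-generated bridge names: "br-" + 12 hex characters
is_autogen_net() {
  printf '%s' "$1" | grep -qE '^br-[0-9a-f]{12}$'
}

# With all containers stopped, remove every auto-generated network:
#   docker network ls --format '{{.Name}}' | while read -r n; do
#     is_autogen_net "$n" && docker network rm "$n"
#   done
# Or the blunt instrument, dropping every unused custom network at once:
#   docker network prune -f
```

    The truly scorched-earth option is Settings > Docker > disable, delete the docker.img vDisk, re-enable, and reinstall containers from Previous Apps; that rebuilds all Docker networking from scratch.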

     

     

  7. Can anyone tell me where to start looking for "Error when trying to connect (Error occurred in the document service: Error while downloading the document file to be converted.) (version 6.3.1.32)" when I try to add the document server's hostname in Nextcloud? Searching the internets, I have found some recommended fixes, such as changing the following config files:

    OnlyOffice:
    default.json -> "rejectUnauthorized": false
    local.json -> "header": "AuthorizationJwt" (in place of "header": "Authorization")
    supervisorctl restart all

    Nextcloud:
    config/config.php ->
    'onlyoffice' => array (
      'verify_peer_off' => true,
      'jwt_header' => "AuthorizationJwt"
    )

    ... but this is basically a new install, and it doesn't look like default.json is meant to be edited from outside the docker. Meanwhile, /healthcheck/ returns true both for the IP address and when I connect via the external domain name, so I think the container is up and running. The fix above suggests some part of the forwarding is broken, and I don't know what tools might help me find it.

     

    My firewall forwards 443 to the NginxProxyManager docker on Unraid. NPM has a wildcard SSL cert for the domain and is set to forward nextcloud.domain.name and docserver.domain.name to the appropriate ports for their respective Unraid dockers.

    All three dockers run on a user-defined Docker network, proxynet (docker network create proxynet).

    Both Nextcloud and the document server are accessible on the local network via IP:port and on the internet via domain name.

    Both the cert and key PEMs are copied to .crt files.

     

    I get the following in the docker log:

     

     


     

    [2021-06-09T18:34:21.282] [ERROR] nodeJS - error downloadFile:url=https://nextcloud.domain.name/apps/onlyoffice/empty?doc=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJhY3Rpb24iOiJlbXB0eSJ9.la3XO9qn6tmmWaNhPtzJXk2kMb0u_-gh6ZnwW-iFnY0;attempt=3;code:EPROTO;connect:null;(id=conv_check_1035115743_docx)

     

    Error: write EPROTO 23318690023232:error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1550:SSL alert number 70

     


    If I manually cut and paste https://nextcloud.domain.name/apps/onlyoffice/empty?doc=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJhY3Rpb24iOiJlbXB0eSJ9.la3XO9qn6tmmWaNhPtzJXk2kMb0u_-gh6ZnwW-iFnY0 into a browser, it downloads a 7 KB file, new.docx, that looks blank to me.

     

    As far as I can tell, everything seems to be working except that the file won't transfer from inside the Nextcloud docker.
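    For what it's worth, "SSL alert number 70" in that EPROTO error is the TLS protocol_version alert: whatever terminates TLS for nextcloud.domain.name when the document server dials out (here, NPM) refused the TLS version the OnlyOffice container offered. A small decoder for the alert number, plus the standard openssl probes I would run (the domain is the placeholder from the post):

```shell
# Decode the NN from an "SSL alert number NN" log line (per RFC 5246)
tls_alert_name() {
  case "$1" in
    40) echo "handshake_failure" ;;
    70) echo "protocol_version" ;;   # peer refused the offered TLS version
    *)  echo "unknown($1)" ;;
  esac
}

tls_alert_name 70

# Probe which TLS versions the proxy actually accepts:
#   openssl s_client -connect nextcloud.domain.name:443 -tls1_2 </dev/null
#   openssl s_client -connect nextcloud.domain.name:443 -tls1_1 </dev/null
# If the versions the container's openssl offers and the versions the proxy
# accepts don't overlap, you get exactly this alert.
```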

     

     

     

     

  8. Mover has been running for hours but I only see a couple GB actually moved.

     

    iotop -o

    Total DISK READ :       0.00 B/s | Total DISK WRITE :     177.10 K/s
    Actual DISK READ:       0.00 B/s | Actual DISK WRITE:       0.00 B/s
      TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND                                                                            
    30785 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.02 % [kworker/u8:5-events_power_efficient]
    15906 be/4 root        0.00 B/s    3.85 K/s  0.00 %  0.00 % shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
    17443 be/4 root        0.00 B/s   34.65 K/s  0.00 %  0.00 % shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
    31806 be/4 root        0.00 B/s    7.70 K/s  0.00 %  0.00 % shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
    22041 be/4 root        0.00 B/s   11.55 K/s  0.00 %  0.00 % shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
    17696 be/4 root        0.00 B/s    7.70 K/s  0.00 %  0.00 % shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
    17796 be/4 root        0.00 B/s   53.90 K/s  0.00 %  0.00 % shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
    16781 be/4 root        0.00 B/s   11.55 K/s  0.00 %  0.00 % shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
    16782 be/4 root        0.00 B/s    7.70 K/s  0.00 %  0.00 % shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
    19995 be/4 root        0.00 B/s   15.40 K/s  0.00 %  0.00 % shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
    16836 be/4 root        0.00 B/s   23.10 K/s  0.00 %  0.00 % shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0

     

    Docker and VMs are off. Where do I look to see what mover is trying to do, to figure out why it does not seem to be making progress?
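    A few places to look, as a sketch. The syslog path is standard on stock Unraid, but note that mover only logs per-file detail when "Mover logging" is enabled under Settings > Scheduler:

```shell
mover_status() {
  # Is the mover script itself still alive?
  ps -ef | grep '[m]over' || true
  # Per-file progress (requires Mover logging enabled in Settings > Scheduler)
  grep -i 'mover' /var/log/syslog 2>/dev/null | tail -n 5
  # What is still sitting on the pool waiting to be moved?
  find /mnt/cache -type f 2>/dev/null | head -n 20
}
mover_status
```

    You can also run mover manually from the console (/usr/local/sbin/mover, if I remember the path right) to watch it work in the foreground.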

  9. When designating the docker image and appdata folder locations, the help tooltip says the following:

     

    "It is recommended to create this folder outside the array, e.g. on the Cache pool. For best performance SSD devices are preferred."

     

    But the default is to create it on the array. Is there a difference, and if so what, between setting it to e.g. /mnt/somecachepooldevice/appdata/ versus the default /mnt/user/appdata/ with the share set to cache-only on that same cache pool drive under Shares? Are they equivalent? If not, which is preferred?

     

    In some form or another, this question has been asked and answered many times:

     

     

    I see some ambiguity in terms of what works and what causes problems, even though they should be one and the same. The particular reason I am asking is that after manually moving my docker.img and appdata folders to a new second cache pool drive, and manually setting the location to that drive in Docker settings, one docker does not work: openvpn-as. Upon installing, its appdata folder is completely empty except for two empty folders, 'log' and 'etc'. This one was not moved from the initial appdata location; it was installed later, for the first time.

     

    If I mount the docker.img file, the command

    find /mnt/docker_temp/containers -name '*.json' | xargs grep -l '/user/appdata'

    yields two files, hostconfig.json and config.v2.json, both belonging to the openvpn-as container.

     

    So should I move everything back from /mnt/cache2/appdata to /mnt/user/appdata? Or is that irrelevant, and should I look elsewhere for why there is nothing in the openvpn-as container's appdata?

     

  10. On 2/2/2021 at 12:24 PM, OFark said:

     

    I've been searching for this for ages, I've found no end of people complaining about this error and getting no response from the community.

     

    Perhaps this info could be relayed in the Unraid Log. "Have you left a console window open?"

     

    Agreed! That is a particularly complicated-sounding error for such a little thing, and it is easy to lose one of those little console windows behind everything else. Maybe add an option to open console and log windows in a new tab instead?

  11. I don't have an answer to your specific question, but hopefully this is helpful: I found DupeGuru's all-or-nothing results too unreliable, and they require too much user interaction. Instead, install the Nerd Pack plugin, which offers two command-line dupe checkers: fdupes and jdupes. Install either (reportedly jdupes is faster) along with the User Scripts plugin, and write a script to delete dupes that runs periodically. Or, a little more complicated: have the script move the dupes and log what it did, so you can review everything and delete manually. I can't access them right now, but I can post mine if anyone wants them.
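    Not my actual script, but a minimal sketch of the move-and-log idea. All paths here are examples. It uses md5sum so it is self-contained; the real version would instead consume `fdupes -rf DIR` (or jdupes), where -f/--omitfirst makes the tool list only the redundant copies. Filenames containing runs of consecutive spaces would confuse the awk rebuild below, which fdupes avoids:

```shell
#!/bin/bash
# Move every duplicate file into a review folder and log each move.
move_dupes() {
  scan_dir=$1 dupe_dir=$2 log=$3
  mkdir -p "$dupe_dir"
  # Checksum everything, sort so identical hashes are adjacent, and emit
  # every file AFTER the first in each group of identical checksums
  find "$scan_dir" -type f -exec md5sum {} + | sort |
    awk 'seen[$1]++ { $1=""; sub(/^ +/, ""); print }' |
    while IFS= read -r f; do
      printf '%s moving: %s\n' "$(date '+%F %T')" "$f" >> "$log"
      mv "$f" "$dupe_dir/"
    done
}

# Example (hypothetical paths):
#   move_dupes /mnt/user/Media /mnt/user/dupe-review /boot/logs/dupes.log
```

    Review the log and the review folder at leisure, then delete for real.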

     

     

     

     

  12. On 1/21/2018 at 1:27 PM, IanB said:

    I managed to stop, update, and restart manually doing the following:

     

    
    root@Tower:~# /etc/rc.d/rc.ntpd stop
    Stopping NTP daemon...
    root@Tower:~# ntpdate -s time.nist.gov
    root@Tower:~# /etc/rc.d/rc.ntpd start
    Starting NTP daemon:  /usr/sbin/ntpd -g -u ntp:ntp

     

    My time is now correctly in sync, but will have to now watch to see how it drifts out. Is there a way for me to validate that the NTP service is periodically updating from the remote servers?

    Thanks.  This fixed it for me.  Now to see if it goes out again...
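    To answer the validation question quoted above: `ntpq -p` (shipped with ntpd) prints the peer table, and the tally character in column one shows sync state; a `*` marks the peer ntpd is currently synchronized to. A trivial helper around that:

```shell
# Succeeds if the `ntpq -pn` output piped in contains a selected sys.peer,
# which ntpq marks with '*' in the first column
ntp_synced() {
  grep -q '^\*'
}

# Usage: ntpq -pn | ntp_synced && echo "ntpd is synchronized"
```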

  13. netstat -vatn was able to find the source of the problem.  

     

    It seems like there should be a server-side solution to prevent this. After a while, a client anywhere with a stale CSRF token causes parts of Unraid to stop working, possibly from spamming the syslog. How does this work with multiple users? Do administrators email all their users asking them to close their forgotten browser tabs?

  14. 5 minutes ago, johnnie.black said:

    Yes, thank you. You might notice, if you read the second sentence, that I have seen that.

     

    However, as of right now I am only using one browser on one computer after a fresh reboot. So do you mean I have to go find every instance of a webpage I might have left open on any computer at any point in the past? And any phone or tablet that has ControlR? Because that could cover a serious amount of hardware, and a lot of square miles to search!

  15. My syslog is overrun with wrong csrf_token errors generated by the Unassigned Devices plugin. This starts immediately after reboot with only one web browser page open, so the FAQ does not seem to be relevant:

    https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=545988

     

    It did not stop after uninstalling the plugin.

    It did not stop after reboot after uninstalling the plugin.

    There is no UnassignedDevices.php - at least in /boot/config/plugins/unassigned.devices

     

    May 17 08:11:06 NAS root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
     

     

  16. I cannot stop the array:

     

    Array Stopping • Retry unmounting disk share(s)
    May  6 09:51:37 HomeNAS emhttpd: Unmounting disks...
    May  6 09:51:37 HomeNAS emhttpd: shcmd (440): umount /mnt/cache
    May  6 09:51:37 HomeNAS root: umount: /mnt/cache: target is busy.
    May  6 09:51:37 HomeNAS emhttpd: shcmd (440): exit status: 32
    May  6 09:51:37 HomeNAS emhttpd: Retry unmounting disk share(s)...
    May  6 09:51:42 HomeNAS emhttpd: Unmounting disks...
    May  6 09:51:42 HomeNAS emhttpd: shcmd (441): umount /mnt/cache
    May  6 09:51:42 HomeNAS root: umount: /mnt/cache: target is busy.
    May  6 09:51:42 HomeNAS emhttpd: shcmd (441): exit status: 32
    May  6 09:51:42 HomeNAS emhttpd: Retry unmounting disk share(s)...
    May  6 09:51:47 HomeNAS emhttpd: Unmounting disks...
    May  6 09:51:47 HomeNAS emhttpd: shcmd (442): umount /mnt/cache
    May  6 09:51:47 HomeNAS root: umount: /mnt/cache: target is busy.
    May  6 09:51:47 HomeNAS emhttpd: shcmd (442): exit status: 32
    May  6 09:51:47 HomeNAS emhttpd: Retry unmounting disk share(s)...

     

    Rebooted - no change.

     

    Not sure what all I did to put Unraid in this state. In the process of removing a failed disk, I removed the wrong disk before correcting my error. First I had this (https://forums.unraid.net/topic/91867-solved-can-not-mount-unassigned-drive-after-disk-drive-shuffle/?tab=comments#comment-852174). I probably inadvertently turned on passthrough, but that drive had been mounted before, and I was only clicking around in the first place because I couldn't get it to mount.

     

    Also, I manually ran preclear on the new disk. Afterward I thought I formatted it, but I could easily have forgotten and missed that step. After preclear I stopped the array, added the drive, and started the array. Unraid automatically ran its own clear, which took a couple more days. After that finished, I think I just tried to start the array (not realizing the drive was unformatted). Then Unraid ran a parity check, which took another day.

     

    Now, not being entirely sure what damage I have done, I think I just need to stop the array, format the disk, and start it up. But the array won't stop...
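    The usual way to find what is holding /mnt/cache busy is `lsof /mnt/cache` or `fuser -vm /mnt/cache` (I believe lsof ships with Unraid). If neither is handy, /proc gives the same answer; a portable sketch:

```shell
# List PIDs holding a mount busy: any process whose working directory or an
# open file descriptor resolves to a path under $1 -- i.e. the processes
# making umount report "target is busy"
busy_pids() {
  t=$(readlink -f "$1")
  for p in /proc/[0-9]*; do
    pid=${p##*/}
    for link in "$p/cwd" "$p"/fd/*; do
      case "$(readlink -f "$link" 2>/dev/null)" in
        "$t"|"$t"/*) echo "$pid"; break ;;
      esac
    done
  done | sort -un
}

# busy_pids /mnt/cache    # then: ps -fp <pid> to see what to stop
```

    With Docker and VMs stopped, common culprits are a console shell left cd'd into /mnt/cache or a plugin still writing there.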

    homenas-diagnostics-20200506-1152.zip

  17. I can no longer mount an unassigned drive; the option is greyed out. It had been working fine until I went to replace a failed drive. I accidentally removed the drive that I can no longer mount and installed the new one in its place. After booting up, I realized my mistake, shut down, put this drive back on its same SATA connector, and moved the new drive to where it should have been in the first place.

     

    Now the drive is listed in unassigned but I cannot click on 'mount'.

     

     

    Screenshot 2020-05-02 at 14.23.51.jpg

  18. A question about Zabbix-Server and Zabbix-Webinterface, though perhaps more a question about dockers in general: I installed both. The server log looks like it is running, and the DB is being used. However, I can't configure anything at http://ip/zabbix because the page does not load. I assume that is the webinterface docker's job, and its logs show:

     

    2020/04/18 13:07:19 [emerg] 25051#25051: bind() to 0.0.0.0:80 failed (98: Address in use)
    
    nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)

    Obviously port 80 is already in use elsewhere. The webinterface docker does not give me any port options when setting it up, but configures itself to listen on 80 and 443. This doesn't conflict with any other docker, but Unraid itself is using those ports and (thankfully) must be keeping them from the container.

     

    Here is where things get more unclear to me: Zabbix is using a subdirectory rather than its own port or a subdomain. Is this where I would set up nginx to route traffic to the proper place? Or is the container's network type just set up differently, as bridge? Or both: set up Let's Encrypt with nginx and zabbix-webinterface on their own proxy net?
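    On the port clash itself: Unraid's own webGUI owns 80/443 on the host, so the webinterface container can only keep its internal port 80 if it runs in bridge mode with a different host-side mapping added to the template, e.g. `-p 8089:80` (8089 being an arbitrary free port). `netstat -tlnp` shows what is already bound on Unraid; a portable sketch that reads /proc directly:

```shell
# Print all TCP ports currently in LISTEN state (socket state 0A in
# /proc/net/tcp and /proc/net/tcp6; local_address is hex ip:port)
list_listen_ports() {
  awk 'NR > 1 && $4 == "0A" { split($2, a, ":"); print a[2] }' \
      /proc/net/tcp /proc/net/tcp6 2>/dev/null |
    sort -u | while read -r hex; do printf '%d\n' "0x$hex"; done | sort -un
}

# list_listen_ports | grep -qx 80 && echo "port 80 is taken"
```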

  19. On 2/24/2020 at 3:16 PM, Squid said:

    No it's not.  This is a snip from a completely default install of traccar

    
    -p '5000-5150:5000-5150/tcp' -p '5000-5150:5000-5150/udp'

     

    I guess what I should have said was that everything else was default. I did not realize I could not just edit the ports and restart the container. In the end, I was unable even to delete and rebuild it with altered ports. I have not yet taken the time to learn how this docker works internally, but I get what you are saying now.

     

    Thanks for all the time you take responding to those of us 'learning the hard way'.

  20. But everything is default -- so I could not see how I could be doing that!

     

    Since I was merely editing the port range, I did not pay much attention to "Link to traccar.xml: https://raw.githubusercontent.com/traccar/traccar/master/setup/traccar.xml Add it to your host path before starting the container." Turns out that even if you have created traccar.xml, docker installation modifies its files, including deleting or moving the traccar.xml file. It looks as though you need to recreate traccar.xml each and every time you edit your docker.

     

    And since traccar.xml is a configuration file, my guess is that editing and restarting the docker with a new file may well wipe out any configuration you had. Back up!