Posts posted by FlorinB

  1. 37 minutes ago, SSD said:

    Provide a reproducible set of commands in Windows that results in a file trying to overwrite itself when the copy command is used on a single machine.

Test 1 - try to copy a file onto itself

    Quote

    c:\test1>copy test_copy.txt c:\test1\test_copy.txt
    The file cannot be copied onto itself.
            0 file(s) copied.

Test 2 - try to copy a file onto a symbolic link that points to it

    Quote

    c:\test1>copy test_copy.txt C:\test1\test_copy.txt.lnk
    Overwrite C:\test1\test_copy.txt.lnk? (Yes/No/All): yes
        1 file(s) copied.

    Result: the symbolic link is replaced with the content of test_copy.txt and it no longer points to the original file.

Is this a real example of a symbolic link being altered on Windows? Yes.

What happens in this situation? When the original file is later changed, the link still holds the content of the previously copied file instead of reflecting the original.

    Is this a Windows filesystem bug?
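For reference, the link used in Test 2 could have been created like this from an elevated prompt (this setup step is an assumption; it is not shown in the tests above):

    Quote

    c:\test1>mklink test_copy.txt.lnk test_copy.txt
    symbolic link created for test_copy.txt.lnk <<===>> test_copy.txt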

2. Have you followed steps similar to these on your router?

     

To see the OpenVPN-AS log, open a terminal (console) into the OpenVPN-AS Docker container, type bash - this will make your life easier - and search for where the openvpn.log file is:

    Quote

    # bash
    root@Tower:/openvpn# find / -type f -name "*.log" 2>/dev/null
    /usr/local/openvpn_as/init.log
    /var/log/alternatives.log
    /var/log/apt/history.log
    /var/log/apt/term.log
    /var/log/bootstrap.log
    /var/log/dpkg.log
    /config/log/openvpn.log <- this is your log
    /config/init.log

After that you can run tail -f /config/log/openvpn.log to follow the log, but this will only help you once your router and the VPN-related settings are configured correctly.
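If you prefer, you can also reach the same shell from the unRaid terminal itself. A minimal sketch, assuming the container is named openvpn-as (check the real name with docker ps):

    Quote

    root@Tower:~# docker exec -it openvpn-as bash
    root@Tower:/openvpn# tail -f /config/log/openvpn.log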

  3. 9 minutes ago, Annie SIxgun said:

    Right now it looks to me as if the router has forwarded the connection to itself.

Please do not confuse TCP with UDP. Web pages are served over TCP.

Did you try to access port 1194 in your web browser from your LAN or from an external IP address?

     

    You can start your investigation by looking into the router log files.

Then check the following in unRaid:

    - that the OpenVPN-AS docker container is started and configured correctly.

- check in unRaid that port 1194 is open for the UDP protocol, using the terminal from the Web GUI or SSH to unRaid (see also the tcpdump sketch after this list)

    Quote

    root@Node804:~# netstat -anp|grep 1194
    udp        0      0 192.x.y.110:1194      0.0.0.0:*                           24921/openvpn-opens
     

- be sure that you have created the VPN user, exported the .ovpn config, and imported it into your VPN client. I am using a mobile phone with 4G data enabled instead of the local LAN.

- try to connect with the OpenVPN client, using the imported profile, to your public Internet IP address/hostname (take care: the generated .ovpn profile points to the hostname; if your hostname resolves to a dynamic IP it will only work for a short time)

- if the connection from the OpenVPN client to the server is successful, you should see something like this in the OpenVPN-AS Web UI.
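One way to verify that UDP packets on port 1194 actually reach the unRaid box is to watch the network interface while the client tries to connect. A minimal sketch, assuming tcpdump is installed and the interface is eth0 (adjust both to your system):

    Quote

    root@Node804:~# tcpdump -ni eth0 udp port 1194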

     

    At which step are you @Annie SIxgun?

     

Attention: do not expose the unRaid Web UI or SSH directly to the Internet!

     

  4. 3 hours ago, Annie SIxgun said:

    When I try to access the server with my browser, I am presented with my router sign in page.

Normally the router's sign-in page should not be reachable from the Internet.

You have to forward port 1194 in your router to the unRaid IP (I assume you are talking about the OpenVPN-AS Docker container).

    Have you already watched this tutorial?

     

Additionally, if you do not have a static IP, you will need to use a dynamic DNS service like NoIP, DynDNS, or DuckDNS.
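Once the dynamic DNS name is set up, you can confirm that it resolves to your current public IP before generating the .ovpn profile. The hostname below is only an example:

    Quote

    root@Node804:~# nslookup yourname.duckdns.org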

  5. 5 hours ago, pwm said:

    I really don't like the too much used word "bug" in this context. In the general case, it isn't a bug in unRAID or in Linux but a problem that affects most (all?) OS when the same file is allowed aliased names. This makes it impossible for the program that performs the copy to notice that the source and destination names points to the same file.

When you try to copy a file onto itself and it ends badly, with data loss, it is certainly not the OS's fault. I would not call it a "bug" either.

     

Anyway, this topic, together with the other one linked here, helped me better understand what exactly this "bug" is and how it behaves.

  6. 3 hours ago, jonathanm said:

    No, because UD shares can not participate in user shares.

That is good news.

At the moment my 4TB array consists of a lot of 500GB disks, with 2 x 1.5TB drives for parity.

I am planning to gradually replace the 500GB disks with 4TB WD RED disks over time, but to add a 4TB disk to the array I would actually need 3 x 4TB disks (2 for parity), which is impossible for me to buy all at once right now. Since the Unassigned Devices plugin gives us the possibility to mount out-of-array disks, I will use the first and second 4TB disks outside of the array, to back up the array or to store additional data.

     

    3 hours ago, jonathanm said:

    The root of this issue is that /mnt/user/share/folder/file.txt is the SAME FILE as /mnt/disk1/share/folder/file.txt, but it appears in two different paths.

Clear enough, and somewhat expected behaviour. I would not call it a bug.

     

Thank you, jonathanm!

7. I know that within the array you should not mix copying/moving files between a disk and a share. It should always be disk to disk or share to share, due to a bug which leads to data corruption.

     

    Do we have the same issue with data corruption in the below copy/move situations?

    1. /mnt/user/SHARE to /mnt/disks/UNASSIGNED_DISK

    2. /mnt/diskX to /mnt/disks/UNASSIGNED_DISK

3. /mnt/disks/UNASSIGNED_DISK to /mnt/user/SHARE

4. /mnt/disks/UNASSIGNED_DISK to /mnt/diskX

     

Note: using the Unassigned Devices plugin with Destructive Mode enabled, it is possible to format and use hot-spare disks outside of the unRaid array.
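For reference, you can check which unassigned disks are currently mounted, and how full they are, from the terminal (the Unassigned Devices plugin mounts them under /mnt/disks):

    Quote

    root@Node804:~# df -h /mnt/disks/*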

     

     

  8. 3 minutes ago, jonathanm said:

    Originally [...] the entire array would be unavailable for the time needed to write zeroes the the drive so parity would remain valid

That was a good reason for preclearing back then.

    4 minutes ago, jonathanm said:

    limetech changed the timing of the array start with the addition of a new drive to a protected array, now the writing of zeroes is done in the background while the array is allowed to start normally

    Now the array starts immediately.

     

    Preclear might be used optionally for drive testing purposes.

     

Thank you very much for the detailed clarifications, jonathanm.

  9. 42 minutes ago, ashman70 said:

    I am not clear on how to resolve and I am unsure if they are part of the problem

The Fix Common Problems warnings are telling you everything you need to know:

appdata, Applications, downloads - according to your configuration these should be on the cache drive, but somehow you have moved them to the array.

The Docker folder should not use the cache at all, yet a Docker folder exists (empty or with content) on your cache drive.

You can use the tools mentioned in the warnings to move your files where they should be. Always copy files from disk to disk or from share to share! Do not mix disks with shares - there is a bug which will destroy your data if you do not respect that rule. See the following post for more details:
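As an illustration only (the folder and disk names below are assumptions, not taken from your diagnostics), a disk-to-disk copy of a misplaced folder back to the cache could look like this:

    Quote

    root@Node804:~# rsync -av /mnt/disk1/appdata/ /mnt/cache/appdata/

After verifying the copy, the source folder on the array disk can be removed.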

     

  10. 23 minutes ago, ashman70 said:

    So when I run that, this is what I get.

Please run df -h /mnt/*

In my case this is what I get:

    Quote

    root@Node804:~# df -h /mnt/*
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sdi1       233G   73G  160G  32% /mnt/cache
    /dev/md1        466G  301G  164G  65% /mnt/disk1
    /dev/md2        466G   51G  414G  11% /mnt/disk2
    /dev/md3        466G  389G   78G  84% /mnt/disk3
    /dev/md4        466G  185G  280G  40% /mnt/disk4
    /dev/md5        466G  171G  295G  37% /mnt/disk5
    /dev/md6        932G  838G   94G  90% /mnt/disk6
    /dev/md7        466G  426G   41G  92% /mnt/disk7
    rootfs          7.8G  656M  7.1G   9% /mnt
    shfs            3.9T  2.4T  1.5T  62% /mnt/user
    shfs            3.7T  2.4T  1.4T  64% /mnt/user0

You can see the disk usage of the cache drive as well as of the disks in the array.

     

11. Another way to see what is filling up your cache drive is to open a terminal from your Web GUI and run the following command:


    Quote

    root@Node804:~# du -sh /mnt/cache/*
    4.3G    /mnt/cache/appdata
    33G     /mnt/cache/domains
    3.4G    /mnt/cache/share4all
    21G     /mnt/cache/system
     

In my case the cache is used by appdata, domains, system, and share4all - the last one being a share which should be moved automatically to the array at a specific disk-usage threshold.
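If there are many folders, sorting the same output by size makes the biggest consumer obvious:

    Quote

    root@Node804:~# du -sh /mnt/cache/* | sort -h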

  12. 26 minutes ago, SSD said:

    Not sure why you say that. You're not moving from a broken disk - you are moving from a simulated disk - which basically means some updates to parity. And very minimal since deleting a file just marks it as deleted.

I mean that Copy would be preferred over Move. In case something goes wrong, with Copy you still have the data on the broken/simulated disk unaltered.

  13. 4 hours ago, SSD said:

    So if you copy (or move) a file from a disk share to a user share, the user share will think you are overwriting an existing file. So you are basically trying to copy a file overtop of itself[...]

     

    A rule of thumb is to always copy disk share to disk share, or user share to user share. Do not mix.

A very useful rule. I was already doing this intuitively, but I had not yet been in a situation with one or more failed disks.

     

    4 hours ago, SSD said:

    But I'll give a tip that would allow you to safely copy from a disk share to a user share. Go to the disk share, and RENAME the root level folder. Say it was called "Movies". Change it to "X" or "MovieTemp" or anything that is different from "Movies" and not the name of some other user share. [...] You also need to make sure that the user share configuration excludes (or does not include) that disk. You can then copy from that disk share to the user share. Or, copy from the user share "X" or "MovieTemp" or whatever you call it, to the "Movies" user share. 

We realize that copying the data this way, with some disks broken, will be slower, but if there is no network share or unused disk available, it is the only way.

Do we have to test this, or does it work 100%?
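A minimal sketch of that tip, assuming the data sits on disk3 and the user share is called Movies (both names are hypothetical):

    Quote

    root@Node804:~# mv /mnt/disk3/Movies /mnt/disk3/MovieTemp
    root@Node804:~# rsync -av /mnt/disk3/MovieTemp/ /mnt/user/Movies/

Remember, as SSD describes, to make sure the Movies user share excludes (or does not include) disk3 before copying.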

    4 hours ago, SSD said:

    It is not necessary to Move, Copy is fine.

Moving from the broken disk is the last thing you want... it must be Copy!

     

If this works as you said, SSD, it should definitely be included in the unRaid manual under a section like Shrink Array with Broken Disk(s).

     

  14. 5 hours ago, Squid said:

    since you have Folder Caching disabled

Sorry. I was playing with the settings earlier and forgot to re-enable folder caching.

With folder caching enabled, these are the top 5 processes:

    Quote

    root@Tower:~# ps aux | sort -nrk 3,3 | head -n 5
    root      8212 19.0  0.0 702528 15508 ?        Ssl  Jun20 339:02 /usr/local/sbin/shfs /mnt/user -disks 255 2048000000 -o noatime,big_writes,allow_other -o remember=0
    root      6457  6.2  0.0  11972  2608 ?        SN   11:32  37:25 sh
    root      7464  3.6  0.0      0     0 ?        S    Jun20  64:19 [unraidd]
    root      6244  2.1  0.0      0     0 ?        S    Jun20  37:27 [mdrecoveryd]
    nobody   24372  1.7  0.0 172468 11944 ?        S    10:46  11:04 nginx: worker process
    root@Tower:~#
     

Now my CPU load fluctuates between 15-40%, compared with 2-5% without folder caching enabled. See the highlighted zone.

[screenshot: CPU load graph]

     

  15. 10 hours ago, jonathanm said:

    This method does not keep the drive's data within the array. If the drive to be removed has data you want to stay in the array, you must move it yourself to the other data drives. Parity will be built based entirely and only on the remaining drives and their contents.

    There are 3 possibilities:

    1. copy the emulated data from the failed disk(s) somewhere over the network

2. if you have a cache drive that is big enough, disable the mover and copy the emulated data temporarily to the cache disk

3. if you have hot-spare disk(s), install the Unassigned Devices plugin, enable Destructive Mode, format one of your hot-spare disks, and copy the data from the emulated disk there (see the sketch below)

     

Of course, after completing one of the above steps you have to shrink your array, rebuild parity, and copy your data back to the array.
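For option 3, the copy itself could look like this (the emulated disk number and the spare's mount name are assumptions; adjust them to your system):

    Quote

    root@Node804:~# rsync -av /mnt/disk2/ /mnt/disks/SPARE/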

     

  16. On 1/19/2018 at 4:38 PM, nuhll said:

    It seems like it was caused by dir cache plugin.

    I have the same issue on my system. The directories appdata, domains and system are on my cache drive. After excluding those 3 folders from Folder Caching the system load returned to normal.

     


Additionally, I also adjusted the Maximum interval between folder scans (sec).
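If you want to confirm what the Folder Caching plugin is actually doing, one way (assuming it runs as the usual cache_dirs script) is to look at its running command line:

    Quote

    root@Tower:~# ps aux | grep -i cache_dirs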

     

    Please tag this topic with (SOLVED) if this solved your issue.

     

     
