Everything posted by Freddie

  1. Yep, I failed to consider a call like diskmv "" disk1 disk2, but that could be valid usage if you want to move everything from one disk to another, so I think I'll leave it. consld8 "" disk1 could also be valid, but I don't know if anyone would find that useful. $2 is not tested for null, but the subsequent variable $SRCDISK is tested to be a valid disk, so a null value in $2 is caught. Is there some reason I need to test $2 for null at first reference? Word splitting is not performed in case statements or assignment statements, so case $1 in ... and SRCDISK=$1 are OK. But I will add some more quoting. Thanks for looking and commenting. I welcome more comments if you look deeper.
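     As a quick illustration of the word-splitting point (just a throwaway example, not code from the scripts):
     var="two words"
     SRCDISK=$var            # assignment: no word splitting, SRCDISK gets the full string
     case $var in            # case word: no word splitting here either
       "two words") echo "matched" ;;
     esac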
  2. I've created a couple of bash scripts to facilitate moving files between disks. As these utilities touch user data, I'd like to have some experienced users review and/or test them. If there is interest and no major problems, I will look into making them easier to use (first a plugin for command line usage, then maybe a gui).
     'diskmv' will move a user share directory from one disk to another. It uses a find/rsync command similar to what is found in the standard mover script. It is suitable for merging a user share directory onto a disk that already contains that directory. Files that have duplicate file names on the destination disk will not be moved. By default, 'diskmv' runs in test mode and displays some information about how directories would be moved, but it will not actually move files unless forced.
     'consld8' can consolidate a user share directory from multiple disks onto one disk. If a destination disk is not specified, it will pick the best disk based on max usage and available space. By default, consld8 runs in test mode and displays some information about how directories would be moved, but it will not actually move files unless forced. 'diskmv' is required to actually move files.
     Example usage:
     diskmv "/mnt/user/video/tv/Pushing Daisies" disk4 disk1
     consld8 /mnt/user/video/tv/Wonderfalls disk3
     To get a help message:
     diskmv -h
     I am tagging releases for code that I trust and have used on my real data. Releases are here: https://github.com/trinapicot/unraid-diskmv/releases The master branch has code that I have tested on sample data, but I have not necessarily used it on my real data.
     These utilities move files around on an unRAID server. I have done my best to prevent any data loss, but there is always a chance something can go wrong. Use at your own risk and please ensure you have everything backed up.
     Note for those wishing to use diskmv in the process of changing filesystems: The diskmv script will not verify that copied files are the same as the source files. With the default options, diskmv will rsync each file and, if the rsync is successful, that file is then deleted before moving on to the next file. There is no opportunity to compare the source and destination files. The script relies on the OS, rsync and whatever else is involved to correctly read data from one disk and write it to another.
     If you still want to use diskmv to move all the files and directories from one disk to another, you can use this syntax:
     diskmv -f "" disk1 disk2
     Or to keep source files (copy instead of move):
     diskmv -f -k "" disk1 disk2
  3. Thanks for posting bjp999. The script I downloaded a couple days ago has a preread_skip_pct parameter that is hard coded to 47. This results in almost half of the disk being skipped in the preread stage. I'm guessing that was used for testing and maybe the posted version should be revised to avoid any skipping.
  4. That happens because of the apostrophe in the file name. Reported as a defect here: http://lime-technology.com/forum/index.php?topic=34999.0
  5. Create the top level directory on any array disk. See this post for related limetech explanation: http://lime-technology.com/forum/index.php?topic=34481.msg322945#msg322945
  6. Similar to the go file, you can create a stop file in /boot/config/. It will be executed from /etc/rc.d/rc.local_shutdown.
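     For example, a stop file might look something like this (the service being stopped is just a placeholder):
     #!/bin/bash
     # /boot/config/stop - counterpart to the go file, run at shutdown via /etc/rc.d/rc.local_shutdown
     /etc/rc.d/rc.someservice stop   # placeholder: stop an add-on service cleanly before the array goes down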
  7. Instead of excluding files from the rsync, prevent them from being found. Then none of the following actions (-print, -exec rsync, -delete) will be performed. For example:
     root@tower:/mnt/cache/iso# find . -depth \( \( -type f \! -exec fuser -s {} \; \) -o \( -type d -empty \) \) -print
     ./test.txt
     ./test.mkv
     ./test.thumb
     ./test.nfo
     root@tower:/mnt/cache/iso# find . -depth \( \( -type f \! \( -name '*.thumb' -o -name '*.nfo' -o -exec fuser -s {} \; \) \) -o \( -type d -empty \) \) -print
     ./test.txt
     ./test.mkv
  8. Make sure you are running the most recent version of cache_dirs. Previous versions went into an infinite loop when started from the go file on unRAID 6. The loop keeps the boot process from finishing, and the login prompt never shows up.
  9. Quoting NAS: "I wasn't sure how to answer this and then I realised why. I don't think this is a best practice recommendation; I think it is a feature request and perhaps even a bug report. Let me raise it as such. Thanks"
     Thanks NAS. I'm curious why you concluded it was a feature request or bug report. The permissions of files created from inside a docker container are controlled by settings inside that docker container. If the container creates a file on a volume mapped from unRAID, the umask inside the container sets the permissions, not the unRAID umask. That is why I thought this should be included in the best practices.
     I've been playing around a bit and now have a specific example. Running a needo/couchpotato docker with an empty directory mapped to the /config directory results in these new files:
     -rw-r--r-- 1 nobody users 8822 Jul 31 13:50 config.ini
     drwxr-xr-x 1 nobody users   62 Jul 31 13:50 data/
     These files are not writable by a user set up in the unRAID GUI.
     I cloned the needo/couchpotato git and added a "umask 000" line to the couchpotato.sh script. The results of running that image with another empty config directory:
     -rw-rw-rw- 1 nobody users 8822 Jul 31 13:35 config.ini
     drwxrwxrwx 1 nobody users   62 Jul 31 13:35 data/
     These files have the same permissions as those that have been through the New Permissions utility.
  10. I think it would be good to implement a more compatible umask setting in dockers that create files on unRAID. The default umask in unRAID is 0000, which results in files with wide open permissions (directories: 0777 or drwxrwxrwx, files: 0666 or -rw-rw-rw-). In phusion/baseimage the default umask is 0022, which results in files that are only writeable by the owning user (directories: 0755 or drwxr-xr-x, files: 0644 or -rw-r--r--). These more restrictive permissions can cause problems when user share security is enabled. Many of the applications that are run in these unRAID dockers have internal settings to control permissions, but it would be nice if we did not have to rely on all those separate internal settings. A few possible methods to implement this:
      - Set the umask for each application. For example, in phusion/baseimage, a "umask 0000" command could be added to the run script that starts the application (see the sketch below).
      - Set the default umask for the user nobody. I don't know how to do this, but it seems like a good way to go if one were to create an unRAID baseimage.
      - Set the default umask for all users. In ubuntu 14.04 this is done in /etc/login.defs. This seems heavy-handed; it doesn't seem like a good idea to have all files created within the container end up with wide open permissions.
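      A rough sketch of that first option, using a generic phusion/baseimage runit service (the service name, command, and paths are placeholders, not taken from any real image):
      #!/bin/sh
      # /etc/service/myapp/run - runit run script inside a phusion/baseimage container
      umask 0000                                      # new files 0666, new dirs 0777, matching unRAID's New Permissions
      exec /sbin/setuser nobody /opt/myapp/start.sh   # placeholder command that starts the application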
  11. That is not what I have experienced. And from the comments of the newperms script:
      # Here's a breakdown of chmod "u-x,go-rwx,go+u,ugo+X"
      #   u-x     Clear the 'x' bit in the user permissions (leaves rw as-is)
      #   go-rwx  Clear the 'rwx' bits in both the group and other permissions
      #   go+u    Copy the user permissions to group and other
      #   ugo+X   Set the 'x' bit for directories in user, group, and other
      There is only one setting for umask and it applies to both directories and files. 007 should work fine. I think for most use cases in unRAID, the "other" permissions don't matter too much. I have mine set to 002. It may cause a problem if you have an add-on that runs as a user which is not a member of the users group. In that case, a umask of 000 would be best.
  12. NZBGet has a similar setting under Settings - Security - UMask. Since this is a UMask setting, the value is kind of the opposite of the chmod settings in many other applications; you are specifying which permissions are not enabled in newly created files. A value of 000 will produce files with the same permissions as the unRAID New Permissions utility (files: 666, directories: 777).
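      A quick way to see the effect of a umask value at the shell (just an illustration, nothing NZBGet-specific):
      umask 000; touch f1; mkdir d1; ls -ld f1 d1    # -rw-rw-rw- f1 and drwxrwxrwx d1 (666/777)
      umask 022; touch f2; mkdir d2; ls -ld f2 d2    # -rw-r--r-- f2 and drwxr-xr-x d2 (644/755)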
  13. I can reproduce this error with headphones running in a docker container. The Home page of the headphones webgui writes a cookie with content that is almost 900 characters long and the Logs page writes a cookie that's over 600 characters long. These two cookies together prevent the unRAID webgui from loading. Delete either cookie and the unRAID webgui loads fine again.
  14. I'm just exploring syslogs, but it looks like you can search for the first occurrence of "sd 15" and get this:
      Jun 26 21:25:34 Tower kernel: scsi 15:0:0:0: Direct-Access ATA Hitachi HDS72202 JKAO PQ: 0 ANSI: 5
      Jun 26 21:25:34 Tower kernel: sd 15:0:0:0: [sdj] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
      Then search for "] (sdj)" and get this:
      Jun 26 21:26:08 Tower kernel: md: import disk8: [8,144] (sdj) Hitachi_HDS722020ALA330_JK11B1YAJGH14V size: 1953514552
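      From the command line, something like this should find the same lines (assuming the log is at /var/log/syslog):
      grep -m1 'sd 15' /var/log/syslog     # first occurrence: shows the [sdX] device name for SCSI host 15
      grep '] (sdj)' /var/log/syslog       # shows which unRAID disk slot imported sdj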
  15. I get the same slow mv on both 5.0.5 (reiserfs) and 6.0-beta6 (btrfs). From 5.0.5:
      root@Tower:~# ls -l /mnt/cache/.custom/
      total 1049600
      -rw-rw-rw- 1 nobody users 1073741824 2014-06-26 09:55 af
      root@Tower:~# time { mv -v /mnt/cache/.custom/* /mnt/user/dwnld/; }
      `/mnt/cache/.custom/af' -> `/mnt/user/dwnld/af'
      removed `/mnt/cache/.custom/af'
      real 0m21.119s
      user 0m0.020s
      sys 0m1.500s
      root@Tower:~# ls -l /mnt/cache/dwnld/af
      -rw-rw-rw- 1 nobody users 1073741824 2014-06-26 09:55 /mnt/cache/dwnld/af
      I have no idea what could cause the difference between kal's results and mine.
  16. How did you disable the cache drive? I also removed a cache drive, but it is still enabled in the shares configuration file. You might want to check \\tower\flash\config\share.cfg and see if shareCacheEnabled="yes". This setting is not visible in the webgui when no cache drive is assigned.
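      From the console, a quick check would be (same file as seen from the flash share):
      grep shareCacheEnabled /boot/config/share.cfg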
  17. The issue with cache_dirs not working properly when invoked from the go file on unRAID v6 is due to the first few lines in the cache_dirs script:
      #!/bin/bash
      if test "$SHELL" = "/bin/sh" && test -x /bin/bash; then
        exec /bin/bash -c "$0" "$@"
      fi
      The SHELL environment variable starts out as "/bin/sh" and does not change when executing /bin/bash, so it gets stuck in a loop. When invoked from a standard shell, the SHELL environment variable is "/bin/bash" and it works just fine. The test can be modified like this:
      if test "$BASH"x = "x" && test -x /bin/bash; then
      This works on unRAID 6.0-beta5a booted with Xen. I was thinking of other required test cases, but I'm not sure what this chunk of code is supposed to be doing. It appears to ensure the script is invoked with bash, but doesn't the first line (#!/bin/bash) do that on its own? I would think the other three lines could just be removed.
  18. Looks like that goflex USB enclosure translated the sector size. See here for details: http://forums.justlinux.com/showthread.php?153881-3TB-hard-disk-used-as-external-USB-connection-or-internal-Sata-connection Try connecting the drive to unRAID over USB through the original enclosure. Use ntfs-3g-2013.1.13-x86_64-1.txz to mount it. It is 64 bit like unRAID 6. The i486 version is 32 bit and not compatible with unRAID 6.
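      The install and mount would go roughly like this (device name and mount point are placeholders):
      installpkg ntfs-3g-2013.1.13-x86_64-1.txz
      mkdir -p /mnt/goflex
      mount -t ntfs-3g /dev/sdX1 /mnt/goflex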
  19. I have not yet succeeded in creating a Slackware 14.1 64-bit VM, but along the way, I learned I can chroot into my not-quite-a-VM filesystem and build packages. Attached is a multitail 64-bit package (remove the .txt extension after download). It works on unRAID 6 beta4. Edit: fixed typo.
      Attachment: multitail-5.2.13-x86_64-1_SBo.tgz.txt
  20. Quoting Joe L.: "I can incorporate it... Should be able to do it this weekend."
      I just saw the preclear post has been updated with version 1.15. Thank you Joe L.
  21. I noticed an oddity in your syslog: a couple of your 4TB drives are slightly smaller than the other two. Have these drives been connected to a gigabyte motherboard? You might have an HPA on these disks. I don't see how this could be related to your parity check issue, but you might want to check it out.
  22. I assume yes - either applied at specific lines or to the whole script. I was going to let Joe L. incorporate it, but I can work on it if he wants. I wouldn't. It has the possibility to affect too many other things.
  23. A possible fix is to use the environment variable LC_CTYPE=C. In unRAID 6:
      root@uwer:~# LC_CTYPE=C awk 'BEGIN{ printf ("%c",0x80)}' | hexdump
      0000000 0080
      0000001
      My understanding is that LC_CTYPE=C specifies ascii character encoding instead of UTF-8. It worked with my fake_clear script on a 4TB drive. If anyone wants to try it out for real, you can execute the preclear script as normal except prepend the LC_CTYPE=C:
      LC_CTYPE=C preclear_disk.sh /dev/sdX
  24. Because sometimes the values in the partition data do not have any bytes over 127.
  25. I isolated the problem down to the code in preclear that generates the partition data. The printf statement in awk is used to convert a byte value to a character. In unRAID 6, it seems that any byte over 127 (or 0x7f) is converted into a 2 byte character. For example, in unRAID 6:
      root@uwer:~# awk 'BEGIN{ printf ("%c",0x80)}' | hexdump
      0000000 80c2
      0000002
      root@uwer:~# awk 'BEGIN{ printf ("%c",0x7f)}' | hexdump
      0000000 007f
      0000001
      Compared to unRAID 5:
      root@Tower:~# awk 'BEGIN{ printf ("%c",0x80)}' | hexdump
      0000000 0080
      0000001
      root@Tower:~# awk 'BEGIN{ printf ("%c",0x7f)}' | hexdump
      0000000 007f
      0000001
      I think this results in obviously wrong partition data on all disks larger than 2.2TB and also some disks on the smaller side. I also think it could result in more subtle problems on smaller disks. I am starting to think about ways to generate partition data without using awk.
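      One awk-free direction (just a sketch, not tested against preclear itself) would be to emit raw bytes with the shell's printf, which is not subject to the locale's multibyte conversion:
      printf '\x80' | hexdump                          # 0000000 0080 / 0000001 - a single raw byte
      byte=128
      printf "$(printf '\\%03o' "$byte")" | hexdump    # same result via an octal escape for a numeric value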