Posts posted by wsume99

  1. 16 hours ago, wsume99 said:

    Today I upgraded to the latest Unraid release (6.9.2). I also added a password to the root user account. Now the user script that I have controlling the array fan has stopped working. I copied the output from the script below. Any recommendations on where to start?

    
    Script location: /tmp/user.scripts/tmpScripts/Fan Speed/script
    Note that closing this window will abort the execution of this script
    Disk /dev/sdc current temp is 0
    Disk /dev/sdd current temp is 0
    Disk /dev/sde current temp is 0
    Disk /dev/sdf current temp is 0
    Disk /dev/sdg current temp is 0
    Disk /dev/sdh current temp is 27
    /tmp/user.scripts/tmpScripts/Fan Speed/script: line 93: /sys/class/hwmon/hwmon1/device/pwm2_enable: Permission denied
    cat: /sys/class/hwmon/hwmon1/device/pwm2: No such file or directory
    cat: /sys/class/hwmon/hwmon1/device/fan2_input: No such file or directory
    /tmp/user.scripts/tmpScripts/Fan Speed/script: line 99: *100: syntax error: operand expected (error token is "*100")
    /tmp/user.scripts/tmpScripts/Fan Speed/script: line 104: [: : integer expression expected
    /tmp/user.scripts/tmpScripts/Fan Speed/script: line 106: [: : integer expression expected
    /tmp/user.scripts/tmpScripts/Fan Speed/script: line 128: [: : integer expression expected
    
    DONE

     

    I looked around this morning trying to solve the issue. I ran pwmconfig and the array fan can be controlled, but there are no device entries in /sys/class/hwmon/hwmon1/device/. It appears something has changed in the new version and I can't figure it out yet. I reverted back to 6.8.3 and everything works correctly. More research is in my future.
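
    In case it helps the next person: it looks like newer kernels moved the hwmon attributes up a level, so files that used to live under hwmonN/device/ now sit directly under hwmonN/. A minimal sketch of locating the control file instead of hardcoding the path (pwm2 as the channel is carried over from my script and may differ per board):

    # Find the pwm2 control file whether it lives under hwmonN/
    # (newer kernels) or hwmonN/device/ (older kernels).
    PWM=""
    for h in /sys/class/hwmon/hwmon*; do
        if [ -e "$h/pwm2" ]; then
            PWM="$h/pwm2"
            break
        elif [ -e "$h/device/pwm2" ]; then
            PWM="$h/device/pwm2"
            break
        fi
    done
    echo "pwm2 control file: ${PWM:-not found}"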

  2. Today I upgraded to the latest Unraid release (6.9.2). I also added a password to the root user account. Now the user script that I have controlling the array fan has stopped working. I copied the output from the script below. Any recommendations on where to start?

    Script location: /tmp/user.scripts/tmpScripts/Fan Speed/script
    Note that closing this window will abort the execution of this script
    Disk /dev/sdc current temp is 0
    Disk /dev/sdd current temp is 0
    Disk /dev/sde current temp is 0
    Disk /dev/sdf current temp is 0
    Disk /dev/sdg current temp is 0
    Disk /dev/sdh current temp is 27
    /tmp/user.scripts/tmpScripts/Fan Speed/script: line 93: /sys/class/hwmon/hwmon1/device/pwm2_enable: Permission denied
    cat: /sys/class/hwmon/hwmon1/device/pwm2: No such file or directory
    cat: /sys/class/hwmon/hwmon1/device/fan2_input: No such file or directory
    /tmp/user.scripts/tmpScripts/Fan Speed/script: line 99: *100: syntax error: operand expected (error token is "*100")
    /tmp/user.scripts/tmpScripts/Fan Speed/script: line 104: [: : integer expression expected
    /tmp/user.scripts/tmpScripts/Fan Speed/script: line 106: [: : integer expression expected
    /tmp/user.scripts/tmpScripts/Fan Speed/script: line 128: [: : integer expression expected
    
    DONE

     

  3. 50 minutes ago, Squid said:

    You don't need to click Run in Background. All you have to do is set the cron and hit Apply down at the bottom. As to why it's doing what it's doing, no one can answer without seeing the exact script.

    Thanks for the reply. The script was working just fine; all I needed was to get it running via cron. I was clicking Run In Background instead of Apply. Once I used Apply, the script was loaded into cron and persisted across a reboot. Everything is working just like I need now. Thank you!

  4. I have a working script that I'm now trying to run every 2 minutes via cron. I selected the Custom option from the Schedule drop-down menu, entered */2 * * * * as the Custom Cron Schedule value, and then clicked Run In Background. A pop-up window opens telling me the script is running in the background, which I then close. I have temporarily modified my script to write entries into the log file every time it runs. I know the script runs once because I see the output in the log file, but it does not run again. My settings do not persist after a reboot, or even if I navigate away from the User Scripts management page and come back. I'm not sure how to fix this issue. What am I doing wrong here?

  5. Just upgraded to 6.8.1 and for whatever reason the code in my go file that loaded my custom array fan speed script into cron is now broken. Searching for cron help led me to User Scripts, which I already had installed but was not using as a way to schedule my fan speed script. I want the script to run every 2 minutes on a custom cron schedule, automatically whenever the server is powered up and regardless of array state. A quick search of the forums didn't turn up any posts on how to enter a custom cron schedule for a script. Do I select Custom from the schedule menu, then enter "2 * * * *" into the Custom Cron Schedule box, and then select either Run Script or Run Script in Background?
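
    (Noting for later readers: those two crontab expressions are not the same thing. The script path in the sketch below is my guess at where User Scripts keeps its copies:)

    # "2 * * * *"   -> runs once an hour, at minute 2
    # "*/2 * * * *" -> runs every 2 minutes
    */2 * * * * /boot/config/plugins/user.scripts/scripts/fan_speed/script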

  6. My original post:

    On 1/7/2019 at 12:31 AM, wsume99 said:

    I am using UD to mount and share a 4TB external drive on my server. I am trying to rsync files from my array to the drive. Here is the command to initiate the transfer:

    
    rsync -av /mnt/user/Photography/2005/ /mnt/disks/EasyStore_4/2005/

     

    When the rsync finishes it displays the following message:

    
    rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1189) [sender=3.1.3]

    The errors are all of the following type:

    
    rsync: chown "/mnt/disks/EasyStore_4/2005/12-25-2005/.12-25-05(2).JPG.nkTWa8" failed: Operation not permitted (1)

     

    I compared the source and destination paths after the operation and it looks like all the files were transferred - the count of files and folders along with total size are all equal. I searched and found a post discussing how, depending on how the destination volume is mounted, the root user may not have permission to set ownership on that volume. The suggested remedy was to run rsync -rltgoDv instead of using the -a option. Would this correct my issue, or is there a better way to fix it so I can run rsync without any errors? Having errors at the end of an rsync run is not something I'd prefer to live with.

     

    And the original reply:

    On 1/7/2019 at 2:40 AM, johnnie.black said:

    That's likely the best way; you can also keep -a and just add the --no-perms flag, e.g. rsync -av --no-perms etc.

     

    Resurrecting this problem. I've been fighting issues with some new hardware, and after a lot of work I'm back on my old MB/CPU to try this again. I have tried both remedies (rsync -rltgoDv and rsync -av --no-perms) and neither prevents the errors I outlined above. Any more suggestions?
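
    A possible explanation for why neither remedy helped, for anyone landing here later: -a expands to -rlptgoD, and the chown attempts come from the -o (owner) and -g (group) flags. The -rltgoDv variant still carries g and o, and --no-perms only drops the permission bits, so both still try to chown. A sketch of variants that actually drop ownership (same paths as my original command):

    # Drop owner/group from -a's expansion but keep everything else:
    rsync -rltpDv /mnt/user/Photography/2005/ /mnt/disks/EasyStore_4/2005/

    # Or keep -a and mask only the ownership flags:
    rsync -av --no-owner --no-group /mnt/user/Photography/2005/ /mnt/disks/EasyStore_4/2005/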

  7. 2 hours ago, Benson said:

    The reason I ask is because trying a different USB-SATA bridge should be easy.

     

    I've used a UASP 5-bay enclosure with Unraid for over a year; Unraid doesn't support UASP. Those disks were array members and never dropped out. Before putting it into live use, I ran serious tests under Windows on the same Unraid machine first. I also tried different kinds of bridges, and in general found not much difference between Windows and Linux when it comes to stability.

     

    I found the problem mainly comes down to the USB controller/bridge rather than the OS. There are so many different controllers/bridges on the market.

    If I understand what you are suggesting correctly, I have already done this. The controller on the motherboard is different from the controller on the PCIe card, and I have the same problem when connected to either one.

  8. 5 hours ago, Benson said:

    Have you tried a different USB-SATA bridge? (I understand there's no issue on the Mac and Windows machines.)

    I have not. I was searching last night about USB3 problems in Linux, and there are a lot of posts across various distros and hardware where users had problems similar to mine. It appears that the kernel is quite buggy with USB3 devices. Thanks for the suggestion; I'll look into it.

  9. My problem: when I try to read from or write to a portable HDD over USB3, I get random hangups in the transfer. This happens on both the on-board ports and a PCIe USB expansion card.

     

    Background: I decided to do a hardware refresh on my server. I purchased a Supermicro X10SAE motherboard and an E3-1226 v3 off eBay; both items were used. Everything else (RAM, PSU, SATA cables, fans, etc.) was existing hardware that had given me no issues. As part of this refresh I also purchased 2 x 8TB HDDs to replace a 2TB and a 3TB drive that were already in my array.

     

    After I got the hardware up and running I ran MS prime for 24 hours with no errors. I then installed the two new 8TB drives and a spare 2TB drive I had into the server and ran simultaneous preclears on all three drives until I had cleared both new drives 3 times. No issues were noted at all while running the preclears.

     

    I then moved the drives from my existing server into my new server. Three of the six drives in my array were still on RFS so I used the two new 8TB drives to go through the process of converting the three RFS drives over to XFS. Once that was complete I re-organized a pretty disorganized photography library. I mention all this because in order to do all of these things I exclusively used rsync via the terminal to move a ton of files around between array drives. In total I'd estimate that I performed around 15TB of total rsync transfers between array drives. I had no issues at all during this time.

     

    Once I had finished all of this on the server, I tried to add some files from an HFS+ formatted drive onto the array. That is when the problems started. I was transferring the files using rsync and the operation would just hang - the progress in the terminal would stop and the disk throughput on the main GUI would drop to zero. Nothing was reported in the log file. I took that same portable HDD, plugged it into a MacBook, and transferred about 500GB of files onto my array over the network using rsync from the Mac with no issues. I thought it might be a problem with the HFS+ formatting, but maybe it wasn't; see the next paragraph.

     

    At that point I had all of my photo library organization done and I wanted to back up some files on my array to a different portable HDD. I formatted this HDD using exFAT. Using rsync I started copying files onto the portable HDD, but the same thing happened - random hangs, I/O drops to zero on the GUI, no errors in the logfile. Again, this happens with both the onboard USB3 ports and the PCIe USB expansion card. I plugged that same HDD into a Windows machine and wrote ~250GB of files to it with no issues.

     

    So I am now scratching my head. I'm thinking it has to be one of the following possible issues:

    1) Bad motherboard (something in the PCH)?

    2) Driver or kernel issue with USB3? I doubt this is the issue because the PCIe expansion card was in my old server and handled large file transfers without issue right up to the start of my hardware refresh.

     

    I'm eliminating the CPU and RAM because how could they only have problems during USB3 transfers but not SATA transfers? Maybe I'm wrong here.

     

    My plan is to swap the array back into my old server (different MB, CPU, RAM, PSU, and case) and see if the problem goes away. After all that "background" I guess I'm just looking for any other ideas that the community might have as to how to isolate the culprit. Am I missing something? Any feedback would be greatly appreciated.
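
    (One diagnostic worth running in the meantime: watching the kernel log live during a transfer. The grep pattern is only a starting point:)

    # Follow the kernel log while a transfer runs (second terminal);
    # xhci resets or USB errors should show up here if the controller
    # or bridge is misbehaving.
    dmesg -wH | grep -iE 'usb|xhci|reset|error'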

  10. I am using UD to mount and share a 4TB external drive on my server. I am trying to rsync files from my array to the drive. Here is the command to initiate the transfer:

    rsync -av /mnt/user/Photography/2005/ /mnt/disks/EasyStore_4/2005/

     

    When the rsync finishes it displays the following message:

    rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1189) [sender=3.1.3]

    The errors are all of the following type:

    rsync: chown "/mnt/disks/EasyStore_4/2005/12-25-2005/.12-25-05(2).JPG.nkTWa8" failed: Operation not permitted (1)

     

    I compared the source and destination paths after the operation and it looks like all the files were transferred - the count of files and folders along with total size are all equal. I searched and found a post discussing how, depending on how the destination volume is mounted, the root user may not have permission to set ownership on that volume. The suggested remedy was to run rsync -rltgoDv instead of using the -a option. Would this correct my issue, or is there a better way to fix it so I can run rsync without any errors? Having errors at the end of an rsync run is not something I'd prefer to live with.

  11. After more searching, it looks like this is actually a known issue with the Linux kernel module that handles the HFS+ filesystem. Rsync is hanging due to a problem reading the HFS+ filesystem; I guess I just got lucky before when using this drive. Regardless, I'm switching to exFAT formatting on all my portable hard drives now, since I have Windows, Linux, and Mac machines. Hopefully that will fix the problem.

  12. I'm beginning to think this issue is being caused by something on the server.

     

    The files on this drive were all written from a Mac laptop. I plugged the HDD into the laptop, connected the laptop to Ethernet, and then rsynced the files from a terminal on the Mac. It ran overnight without any problems. Of course it is slower copying over the network compared to copying files onto the array from a device connected directly to the server. But since the HDD can be read by the Mac without issues, that leads me to believe the issue is on my server. Now if I can just figure out what the problem is. 😂

  13. I'm running v6.6.6 and am trying to copy ~600GB of data (DSLR photos) off a portable USB3 HDD onto my array. I'm using an rsync -avPX command via a terminal session to copy files from the drive (UD mounted) into a duplicate folder on my array. I have had the drive plugged into both USB3 and USB2 ports on the MB, as well as a USB3 PCIe card that I previously used in my old server. The problem persists no matter how the drive is connected: the transfer hangs on random files. If I close the terminal, open another, and re-initiate the rsync, it will start copying again and then not too long afterwards get stuck again. I used this same drive several weeks ago to copy nearly 1TB of files onto my old server using the same USB3 PCIe card with no problems. I've searched the forums and have not yet found a solution, or even anyone reporting a similar issue. I'm still trying to determine if this is an issue with my portable HDD or something on the server side. Any guidance would be appreciated. Is there a way to run a SMART report on the portable HDD through Unraid?
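
    (A sketch of what should work from the terminal if the USB bridge passes SMART through; /dev/sdX stands in for however the portable drive enumerates:)

    # Many USB-SATA bridges support the SAT pass-through; -d sat asks
    # smartctl to use it. /dev/sdX is a placeholder.
    smartctl -a -d sat /dev/sdX

    # If the bridge rejects that, let smartctl probe for a usable type:
    smartctl -a -d test /dev/sdX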

  14. I needed more drive slots, so I bought a new case, and figured why not upgrade my MB and CPU while I'm at it. I had enough spare parts lying around to make a second server, so that is what I'm in the middle of ATM. I bought a used Supermicro X10SAE motherboard and an E3-1226 v3 CPU for $141 total. I am reusing a functional PSU and RAM along with misc fans, etc. I want to stress my new hardware to make sure everything is functional. Based on my research, my plan is to complete the following:

     

    1) 24 hr memtest (currently underway)

    2) 4 hr prime CPU stress

    3) I have 2 x 8TB HDDs that are new, plus one old 2TB HDD, that I plan to install and run a series of simultaneous preclears on. I plan to move them around among the on-board SATA ports between passes.
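
    4) Possibly a destructive badblocks pass on each new drive as a cross-check to preclear. This wipes the drive, and /dev/sdX is a placeholder - something like:

    # Four-pattern write+verify over the whole disk (destructive).
    # -b 4096 avoids the 32-bit block-count limit on large drives,
    # -s shows progress, -v reports errors verbosely.
    badblocks -wsv -b 4096 /dev/sdX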

     

    Anything else I should do to make sure my new setup is good to go?

  15. I'm at a loss with NZBGet and need some guidance. I've done quite a bit of searching/reading but can't seem to figure out what my problem is. I found a few things along the way but none of them fixed my issue.

     

    Background: I recently deleted my flash drive by accident and had to re-set up my server. I'm now running 6.6.5. I have all my docker containers installed on a non-array drive. I used to mount this drive via some code in my go script, but as part of the rebuild process I switched to the UD plugin. I am reusing the container I had previously, so the NZBGet config file has not changed. All that has changed is the docker path mappings, because the path changed when I moved to UD. I did set the RW/Slave settings for the folders on the UD-mounted drive, as specified in the UD thread.

     

    My problem: NZBGet is running and I can connect to the web GUI. There is an NZB in the queue that says it is downloading, but no progress is being made and the download speed is listed as "0 KB/s". I deleted the NZB and re-added it to the queue - no change. I added a second NZB to the queue and tried to download it - same behavior. I've checked all the container mappings a ton of times and cannot find anything out of order. I'm not sure where to look. I've included some screenshots of my docker mappings below. I had NZBGet working just fine, and all I (think) I've done is change the name of the drive and the mounting process for the drive containing the docker image. This leads me to believe I have something messed up in the configuration of the docker container, but I could certainly be wrong. Hopefully someone can help me get this sorted out.

     

    [Three screenshots of the Docker container path mappings were attached here.]
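
    (For completeness, the mappings can also be dumped from a terminal as the container actually sees them - a sketch, with nzbget standing in for the container name:)

    # Print source -> destination for every mount the container sees.
    docker inspect -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' nzbget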

     

  16. I'm looking for advice from the community on the automation of home video importing and organization. I've been reading quite a bit and while I am certainly a bit more informed I'm still uncertain of the best way to proceed.

     

    All of my home videos are currently stored on my server. I have the files separated into folders corresponding to the date the video was taken. Some folders have a number of files in them and others are just a single file. I'm pretty happy with the arrangement. My only problem is that copying files over to my server from the SD cards is a PITA and I'm looking to automate this process as much as possible. So my objective is to insert the SD card into my server and automatically copy all the files into newly created folders in my home videos share that correspond to the date the video was taken. Feedback thru the webUI and in the syslog would be ideal.

     

    My first plan was to utilize the Unassigned Devices plugin to auto-mount the SD card and then run a shell script to copy the files over to the array. There is even a sample script in the support thread for the plugin that does this, except it only copies the files to a single folder. I started searching and found a Python script on Stack Overflow that claims to do exactly what I need for folder creation, but I'm not sure how to fold that into a shell script via UD, so more research is needed. A rough sketch of what I have in mind is below.
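
    Something like this is the shape of what I'm after (the mount point, file extension, and share name are all guesses on my part; file mtime stands in for the date taken, which EXIF would do better):

    #!/bin/bash
    # Sketch: copy videos from a UD-mounted SD card into date-named
    # folders on the array. Uses each file's modification time, not
    # the EXIF capture date.
    SRC=/mnt/disks/SDCARD/DCIM      # assumed UD mount point
    DEST=/mnt/user/HomeVideos       # assumed share name
    find "$SRC" -type f -iname '*.mp4' | while read -r f; do
        d=$(date -r "$f" +%Y-%m-%d)   # file mtime as YYYY-MM-DD
        mkdir -p "$DEST/$d"
        rsync -av "$f" "$DEST/$d/"
    done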

     

    I also found that a Docker container for digiKam is available. I have never used this program before, but it claims to be able to import files off SD cards and create folders during the process. I'm not sure if I can set it up to run in the background and import the files automatically; I'd prefer to not have to do anything other than insert/remove the SD card.

     

    So I am basically just looking for advice from the community on how to best achieve the functionality I am looking for. I'm open to other ideas or approaches that achieve my goals thru alternate methods and I'm not afraid to put in some work to achieve them.

     

     

  17. I knew someone would say "just use the UD plugin," and I looked at it last night but was too tired to figure a new plugin out. I have had my system configured this way for at least 5 years without any problems until now, and the problem is that I deleted the smb-extra.conf file. I figured I could either reconfigure all my docker containers or simply edit the smb-extra.conf file so that the mount is shared; at the time, fixing my smb-extra.conf seemed simpler. After sleeping on it, I re-installed UD, created a new share, and switched over all my docker mappings, which was actually not bad at all. Problem solved.

  18. I'm in the middle of rebuilding my server (v6.6.5) because I accidentally erased my flash drive (🙄) and have run into a snag with an unassigned drive that I have apps/dockers installed on. I can see the network share, but Windows asks me for a username/password if I attempt to access it. I can open all my other shares.

     

    I have the following entries in my go file:

    mkdir -p /mnt/disk/sdf1
    mount -t reiserfs -o noatime,nodiratime /dev/sdf1 /mnt/disk/sdf1

     

    And I have added this to the smb-extra.conf file:

    [sdf1]
        path = /mnt/disk/sdf1
        read only = no
        valid users = whoever
        write list = whoever

     

    I've been searching the forum for a while and I cannot figure out what I'm missing. I'm pretty sure it's simple, but I don't have a clue. Any help would be appreciated.
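
    (For comparison while I debug: if the goal is for this share to behave like a public share, a guest stanza would look something like the sketch below; whether the valid users/write list lines should stay depends on whether that user actually exists in Samba:)

    [sdf1]
        path = /mnt/disk/sdf1
        read only = no
        guest ok = yes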

  19. Did a little reading on UFS Explorer. There are several versions; it looks like the Standard version would meet my needs. It can restore accidentally deleted files, and it works with XFS and ReiserFS as well as FAT32 🤔🤔. So an interesting thought popped into my head: why couldn't I also use the software to recover the files that were deleted from my flash drive? Seems reasonable to me, and it would probably save me several hours of setup time getting the system back up and running, shares set up, and all my apps installed and reconfigured. Any reason not to try that?

  20. 4 hours ago, trurl said:

    Not sure what a "partially deleted file" would be though.

    I should have been more clear. I meant it was marked deleted in the filesystem but not yet overwritten. Since I wasn't writing anything to the array, I'm assuming that nothing on any of the data disks was overwritten, and all I am dealing with is files marked deleted in the filesystem that remain on the drive. Or perhaps something got corrupted because I killed the power as it was trying to mark a file deleted and didn't complete the operation, as you pointed out.

    Everything I care about, data-wise, is backed up onto another device, so I'm just trying to minimize my time repairing the damage.

     

    Thanks again for all the advice in this thread. It has been very helpful so far. Time to research UFS Explorer. 😁

  21. 1 hour ago, trurl said:

    You would need something that could work with whatever filesystem was on the disks (ReiserFS, XFS, btrfs).

    I have a mixture of ReiserFS and XFS. Any drive I added after XFS was introduced was formatted as XFS. I recall reading something in a post on the forum indicating this was a good way to proceed.