wsume99

Everything posted by wsume99

  1. I looked around this morning trying to solve the issue. I ran pwmconfig and the array fan can be controlled, but there are no devices in /sys/class/hwmon/hwmon1/device/. It appears something has changed in the new version and I can't figure it out yet. I reverted back to 6.8.3 and everything works correctly. More research is in my future.
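     For reference, this is roughly how I poke at the fan by hand through sysfs. The hwmon index and pwm file names below are examples only and vary by board/driver; on some kernels the files sit under hwmonX/ directly instead of hwmonX/device/.

        # list detected hwmon devices and confirm which chip is which
        ls /sys/class/hwmon/
        cat /sys/class/hwmon/hwmon1/name
        # switch the fan header to manual control and set roughly 50% duty (0-255)
        echo 1   > /sys/class/hwmon/hwmon1/pwm2_enable
        echo 128 > /sys/class/hwmon/hwmon1/pwm2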
  2. Today I upgraded to the latest unraid release (6.9.2). I also added a password to the root user account. Now the user script that I have controlling the array fan stopped working. I copied the output from the script below when it runs. Any recommendations on where to start?
     Script location: /tmp/user.scripts/tmpScripts/Fan Speed/script
     Note that closing this window will abort the execution of this script
     Disk /dev/sdc current temp is 0
     Disk /dev/sdd current temp is 0
     Disk /dev/sde current temp is 0
     Disk /dev/sdf current temp is 0
     Disk /dev/sdg current temp is 0
     Disk /dev/sdh current temp i
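     For context, the per-disk temperature read is done roughly like this -- a simplified sketch assuming smartctl, the real script is a bit more involved:

        # simplified sketch of the per-disk temperature read (assumes smartctl)
        for disk in /dev/sd[c-h]; do
            # -n standby avoids spinning up a sleeping drive; an empty result falls back to 0
            temp=$(smartctl -A -n standby "$disk" | awk '/Temperature_Celsius/ {print $10}')
            echo "Disk $disk current temp is ${temp:-0}"
        done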
  3. Thanks for the reply. The script was working just fine; all I needed was to get it running in cron. I was clicking Run In Background instead of Apply. Once I used Apply it was loaded into cron and persists across a reboot. Everything is working just like I need now. Thank you.
  4. I have a working script that I'm now trying to run every 2 minutes via cron. I selected the Custom option from the schedule drop-down menu, entered */2 * * * * as the Custom Cron Schedule value, and then clicked Run In Background. A pop-up window opens telling me the script is running in the background, which I then close. I have modified my script temporarily to write entries into the log file every time it runs. I know the script runs once because I see the output in the log file, but it does not run again. My settings do not remain after reboot or even if I navigate away from the User S
  5. Just upgraded to 6.8.1 and for whatever reason the code in my go file that loaded my custom array fan speed script into cron is now broken. Searching for cron help led me to User Scripts, which I already had installed but was not using as a way to schedule my fan speed script. I'm trying to run the script on a custom cron schedule every 2 minutes. I want it to run automatically whenever the server is powered up, regardless of array state. A quick search of the forums didn't catch any posts on how to enter a custom cron schedule for a script. Do I select Custom
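     For reference, the go-file approach that broke looked roughly like this; the script path is illustrative, not my exact one:

        # rough shape of the old go-file approach -- script path is illustrative
        cp /boot/custom/fan_speed.sh /usr/local/bin/fan_speed.sh
        chmod +x /usr/local/bin/fan_speed.sh
        # append a root crontab entry to run it every 2 minutes
        (crontab -l 2>/dev/null; echo "*/2 * * * * /usr/local/bin/fan_speed.sh") | crontab -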
  6. My original post: And the original reply: Resurrecting this problem. I've been fighting issues with some new hardware, and after a lot of work I'm back to my old MB/CPU to try this again. I have tried both remedies (rsync -rltgoDv and rsync -av --no-perms) and neither prevents the errors I outlined above. Any more suggestions?
  7. If I understand what you are suggesting correctly, I have already done this. The controller on the motherboard is different from the controller on the PCIe card. I have the same problem when I'm connected to either controller.
  8. I found this post, which discusses problems with UASP that affect portable USB drive operations. Is this what you were referring to?
  9. I have not. I was searching last night about USB3 problems in Linux, and there are a lot of posts across various distros and hardware where users had problems similar to mine. It appears that the kernel is very buggy with USB3 devices. Thanks for the suggestion, I'll look into it.
  10. My problem: when I try to read from or write to a portable HDD over USB3 I get random hangups in the transfer. This is happening on both the on-board ports and a PCIe USB expansion card. Background: I decided to do a hardware refresh on my server. I purchased a Supermicro X10SAE motherboard and an E3-1226 v3 off eBay; both items were used. Everything else (RAM, PSU, SATA cables, fans, etc.) was existing hardware that I had and was not experiencing any issues with. As part of this refresh I also purchased 2 x 8TB HDDs to replace a 2TB and 3TB drive that were already in my a
  11. I am using UD to mount and share a 4TB external drive on my server. I am trying to rsync files from my array to the drive. Here is the command to initiate the transfer:
     rsync -av /mnt/user/Photography/2005/ /mnt/disks/EasyStore_4/2005/
     When the rsync finishes it displays the following message:
     rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1189) [sender=3.1.3]
     The errors are all of the following type:
     rsync: chown "/mnt/disks/EasyStore_4/2005/12-25-2005/.12-25-05(2).JPG.nkTWa8" failed: Operation not permitted (1)
     I compa
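     For anyone following along, the variants I've been trying to sidestep the chown failures look like this; none of these is a confirmed fix, just what I've been testing:

        # original form, then two variants that relax permission/ownership handling
        rsync -av            /mnt/user/Photography/2005/ /mnt/disks/EasyStore_4/2005/
        rsync -av --no-perms /mnt/user/Photography/2005/ /mnt/disks/EasyStore_4/2005/
        rsync -rltgoDv       /mnt/user/Photography/2005/ /mnt/disks/EasyStore_4/2005/
        # dropping owner/group preservation entirely is another possibility
        rsync -rltDv --no-owner --no-group /mnt/user/Photography/2005/ /mnt/disks/EasyStore_4/2005/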
  12. After more searching it looks like this is actually a known issue with the Linux kernel module that handles the HFS+ filesystem. rsync is hanging due to a problem reading the HFS+ filesystem; I guess I just got lucky before when using this drive. Regardless, I'm switching to exFAT formatting on all my portable hard drives now since I have Windows, Linux and Mac machines. Hopefully that will fix the problem.
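     In case it's useful to anyone, reformatting a portable drive as exFAT from a Linux box is a one-liner, assuming the exFAT tools are installed; the device name below is an example, and it wipes the partition:

        # example only: reformat a partition as exFAT (destroys existing data)
        mkfs.exfat /dev/sdX1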
  13. I'm beginning to think this issue is being caused by something on the server. The files on this drive were all written from a Mac laptop. I plugged the HDD into the laptop, connected the laptop to Ethernet, and then rsynced the files from a terminal on the Mac. It ran overnight without any problems. Of course it is slower copying over the network compared to copying files onto the array from a device connected directly to the server. However, since the HDD can be read by the Mac without issues, that leads me to believe the issue is on my server. Now if I can just figure out what the proble
  14. I'm running v6.6.6 and am trying to copy ~600GB of data (DSLR photos) off a portable USB3 HDD onto my array. I'm using an rsync -avPX command via a terminal session to copy files from the drive (UD mounted) into a duplicate folder on my array. I have had the drive plugged into both USB3 and USB2 ports on the MB as well as a USB3 PCIe card that I previously used in my old server. The problem persists no matter how the drive is connected. The transfer hangs on random files. If I close the terminal and open another and re-initiate the rsync, it will start copying again and then not too long afterwards
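     A retry loop along these lines would at least automate the close-the-terminal-and-rerun step; this is only a sketch -- the paths are placeholders and the --timeout value is arbitrary:

        # sketch: keep re-running the copy until it completes; --timeout makes rsync
        # bail out on its own if no data moves for 5 minutes, then we retry
        until rsync -avPX --timeout=300 /mnt/disks/USB_HDD/photos/ /mnt/user/Photography/import/; do
            echo "rsync exited with status $?, retrying in 30s..."
            sleep 30
        done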
  15. I needed more drive slots so I bought a new case. I figured why not just upgrade my MB and CPU while I'm at it. I had spare stuff lying around to make a second server, so that is what I'm in the middle of ATM. I bought a used Supermicro X10SAE motherboard and an E3-1226 v3 CPU for $141 total. I am reusing a functional PSU and RAM along with misc fans, etc. I want to stress my new hardware to make sure everything is functional. Based on my research my plan is to complete the following:
     1) 24 hr memtest (currently underway)
     2) 4 hr prime CPU stress
     3) I have 2 x 8TB HDDs t
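     For the drive-testing step, what I have in mind is a full write/read surface test along these lines (example command only -- it's destructive, so only on drives that hold no data yet):

        # destructive write+verify surface test on a brand-new drive (example only)
        # -w write-mode test, -s progress, -v verbose, -b 4096-byte blocks for large disks
        badblocks -wsv -b 4096 /dev/sdX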
  16. Wait a second, I just read thru the logfile and it's having trouble connecting to the news server. I recently got a new CC and I bet that is the problem. 🤦‍♂️
  17. Screenshots: UD mount; first half of paths; second half of paths.
  18. I'm at a loss with NZBGet and need some guidance. I've done quite a bit of searching/reading but can't seem to figure out what my problem is. I found a few things along the way but none of them fixed my issue. Background: I recently deleted my flash drive by accident and had to set my server up again. I'm now running 6.6.5. I have all my docker containers installed on a non-array drive. I used to mount this drive via some code in my go script, but as part of the rebuild process I switched to the UD plugin. I am reusing the container I had previously, so the NZBGet config file has not cha
  19. I already have Sonarr up and running; I never even considered using it for this purpose. Thanks for the suggestion.
  20. I'm looking for advice from the community on automating home video importing and organization. I've been reading quite a bit, and while I am certainly a bit more informed, I'm still uncertain of the best way to proceed. All of my home videos are currently stored on my server. I have the files separated into folders corresponding to the date the video was taken. Some folders have a number of files in them and others are just a single file. I'm pretty happy with the arrangement. My only problem is that copying files over to my server from the SD cards is a PITA and I'm looking t
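     What I'm picturing is something that sweeps a mounted SD card and drops each clip into a per-date folder automatically, roughly like this (sketch only -- the mount point, share name and extensions are assumptions, not my actual setup):

        # sketch: copy clips off a mounted SD card into per-date folders keyed on
        # each file's modification time; paths below are placeholders
        SRC=/mnt/disks/SD_CARD/DCIM
        DEST=/mnt/user/HomeVideos
        find "$SRC" -type f \( -iname '*.mp4' -o -iname '*.mov' \) | while read -r f; do
            d=$(date -r "$f" +%Y-%m-%d)      # e.g. 2018-07-04
            mkdir -p "$DEST/$d"
            rsync -t "$f" "$DEST/$d/"
        done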
  21. I knew someone would say "just use the UD plugin" and I looked at it last night, but was too tired to figure a new plugin out. I have had my system configured this way for at least 5 years without any problems until now, and the problem is that I deleted the smb-extra.conf file. I figured I could either reconfigure all my docker containers or simply edit the smb-extra.conf file so that the mount is shared. At the time, fixing my smb-extra.conf seemed simpler. After sleeping on it I re-installed UD, created a new share and switched over all my docker mappings, which was actually not bad at all. Pro
  22. I'm in the middle of rebuilding my server (v6.6.5) because I accidentally erased my flash drive (🙄) and have run into a snag with an unassigned drive that I have apps/dockers installed on. I can see the network share, but Windows is asking me for a username/password if I attempt to access it. I can open all my other shares. I have the following entries in my go file:
     mkdir -p /mnt/disk/sdf1
     mount -t reiserfs -o noatime,nodiratime /dev/sdf1 /mnt/disk/sdf1
     And I have added this to the smb-extra.conf file:
     [sdf1]
     path = /mnt/disk/sdf1
     read only = no
     valid us
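     For reference, a full share stanza of this shape would look something like the sketch below; the account name is only a placeholder, not my actual entry:

        [sdf1]
            # 'someuser' below is a placeholder account name, not a real entry
            path = /mnt/disk/sdf1
            read only = no
            valid users = someuser
            write list = someuser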
  23. Did a little reading on UFS Explorer. There are several versions. It looks like the standard version would meet my needs. It can restore accidentally deleted files and it works with XFS and ReiserFS as well as FAT32 🤔🤔. So an interesting thought popped into my head. Why couldn't I also use the software to recover the files that were deleted from my flash drive? Seems reasonable to me. It would probably save me several hours of setup time getting the system back up and running, shares setup and all my apps installed and reconfigured. Any reason not to try that?
  24. I should have been more clear. I meant it was marked deleted in the filesystem but not yet overwritten. Since I wasn't writing anything to the array, I'm assuming that nothing on any of the data disks would have been overwritten and all I am dealing with is files marked deleted in the filesystem that remain on the drive. Or perhaps something got corrupted because I killed the power as it was trying to mark a file deleted but didn't complete the operation, as you pointed out. Everything I care about data-wise is backed up onto another device, so I'm just trying to minimize my time repairing the da
  25. I have a mixture of ReiserFS and XFS. Any drive I added after XFS was introduced was formatted as XFS. I recall reading something in a post on the forum indicating this was a good way to proceed.