leodavinci

Members
  • Posts: 20
  • Gender: Undisclosed

  1. Well, look at that. Thanks a ton. I don't think I would ever have found that. The container started; now I just hope I can get the stupid game to connect.

  2. So I did all of this, and I get an error message that says "/usr/bin/docker: Error response from daemon: Requested CPUs are not available - requested 2,13,18,29, available: 0-1". From my admittedly brief research, it looks like a setting that is applied when the container is created. Any suggestions on how I can get around this or fix it?

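     For reference, my understanding of where those numbers come from (a sketch using Docker's --cpuset-cpus option; the image name is just a placeholder):

       # The container is pinned to CPU cores 2,13,18,29, but this host
       # only exposes cores 0-1, so the daemon refuses to start it:
       docker run --cpuset-cpus="2,13,18,29" some/image

       # Pinning to cores the host actually has lets the container start:
       docker run --cpuset-cpus="0,1" some/image
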
  3. I am getting pending sector warnings, but I haven't gotten any warnings for that drive. I got warnings for a removable drive I am using as a rotating off-site backup for all the important stuff. It looks like this:

       Event: unRAID device sdb SMART health [198]
       Subject: Warning [TOWER] - offline uncorrectable is 6424
       Description: ST4000LM016-1N2170_W801PWBP (sdb)
       Importance: warning

     That looks bad; I am going to check it out and maybe return it under warranty if I can. I got one for an array drive that says:

       Event: unRAID Disk 3 SMART health [198]
       Subject: Warning [TOWER] - offline uncorrectable is 2
       Description: WDC_WD20EARS-00S8B1_WD-WCAVY3778723 (sde)
       Importance: warning

     I am doing an extended SMART test on it. Nothing about that cache drive, though. I guess it just failed out of the blue. That sucks.

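     In case it helps anyone searching later, the extended test and the attributes these warnings report can also be checked from the console with smartctl (a sketch; sde is the array drive from the second warning):

       # Start the extended (long) self-test in the background:
       smartctl -t long /dev/sde

       # Review the attribute table; 197 (Current_Pending_Sector) and
       # 198 (Offline_Uncorrectable) are the counts the warnings quote:
       smartctl -A /dev/sde
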
  4. Well, that's irritating. I guess I assumed that since the array health was OK, everything was chugging along fine. So, how do I enable SMART warnings? I have all of the notifications under "Notification Settings" set to Browser and Email, and I am getting the array health emails. Under "Disk Settings -> Global SMART Settings" I have the following. What should I change to get these notifications?

       Default SMART notification value: Raw
       Default SMART notification tolerance level: Absolute
       Default SMART controller type: Automatic

       checked   - 5   Reallocated sectors count
       checked   - 187 Reported uncorrectable errors
       unchecked - 188 Command time-out
       checked   - 197 Current pending sector count
       checked   - 198 Uncorrectable sector count

  5. I do have notifications enabled for once a week, and the last one was July 10. Below is the text. Is this something that would happen all of a sudden? Is there something that would have caused it? If it can be OK on one notification and be failing that quickly, should I increase my notification frequency?

       Event: unRAID Status
       Subject: Notice [TOWER] - array health report [PASS]
       Description: Array has 10 disks (including parity & cache)
       Importance: normal

       Parity - ST4000DM000-1F2168_S300HCGM (sdj) - standby [OK]
       Disk 1 - WDC_WD20EZRX-19D8PB0_WD-WMC4M1000956 (sdd) - standby [OK]
       Disk 2 - WDC_WD20EZRX-19D8PB0_WD-WCC4M0355112 (sde) - standby [OK]
       Disk 3 - WDC_WD20EARS-00S8B1_WD-WCAVY3778723 (sdf) - active 30 C [OK]
       Disk 4 - Hitachi_HDS722020ALA330_JK1101B9H9UY9T (sdc) - standby [OK]
       Disk 5 - ST4000DM000-1F2168_Z30266QR (sdk) - active 25 C [OK]
       Disk 6 - ST4000DM000-1F2168_Z3035YBR (sdi) - standby [OK]
       Disk 7 - ST4000DM000-1F2168_S300NL63 (sdm) - standby [OK]
       Disk 8 - ST2000DM001-9YN164_W1E1J0V1 (sdg) - standby [OK]
       Cache - WDC_WD5000AAKS-00TMA0_WD-WCAPW0936270 (sdh) - active 31 C [OK]

       Parity is valid
       Last checked on Mon 03 Jul 2017 03:27:28 PM EDT (7 days ago), finding 0 errors.
       Duration: 13 hours, 27 minutes, 27 seconds. Average speed: 82.6 MB/s

  6. Ok, thanks for the response. Can you tell me what that means and how you determined it? Is there any way to copy the data off? I have a spare waiting to go just for such an eventuality, but I really don't want to have to redo all of my dockers.

  7. Hi all, I've been using unRAID for a few years now with great success. I am currently on unRAID 6.2.4. Last night I was trying to download a video and noticed my Dockers were all stopped. I thought that was suspicious, so I did a reboot and carried on with my evening. This afternoon my wife said she couldn't get Kodi to work on our Raspberry Pi, and when I checked, the cache drive was listed as Unassigned and the only option I had was a reformat. Unfortunately, I did another reboot before I realized I had a real problem. So I got the diagnostics.

     Now when I look at the main screen, it shows the disk as size 0 (zero) and the button says "Insert" and is greyed out. After rebooting yet again, the size and the format option are back. I tried the cache recovery options, but anything I try to do with /dev/sdg1 says it doesn't exist. The only other thing I could find on Google was a post from back in 2009 describing the same problem when the cache disk got full. This is definitely possible, as I pretty much just let this thing run unattended and something could have gotten out of hand.

     Most everything important on the cache drive was backed up, but I would rather do a recovery if possible, as I don't remember how to set everything back up and would have to spend a few nights going through that again. Like I said, it's been a couple of years. Any help you can provide would be appreciated.

     tower-diagnostics-20170712-1952.zip

  8. Ok, so in my smb-extra.conf file I have the following:

       [tc]
       path = /mnt/tc
       valid users = user1 user2
       write list = user1 user2
       browsable = yes
       guest ok = no

     User1 can go into the share, edit files, create files, etc. However, user2 cannot: she can see the files but not edit or create them. I tried changing the order of the users after "valid users" and "write list"; that didn't work. I tried eliminating "write list" altogether; that just made both users read-only. I tried adding "writeable = yes"; user2 was still read-only. I added a third user, "user3", to the list, and that user had the same read-only problem.

     Finally, I think I solved this. I had to put the "force user", "create mask", and "directory mask" lines back in. I think what this does is force any user who connects to the share to be translated to "root", so anything they do within that share is done as the "root" user. Now I just have to try to use this process to encrypt my offsite backups O_o

     Here is my final smb-extra.conf file:

       [tc]
       path = /mnt/tc
       writeable = yes
       valid users = user1 user2
       force user = root
       create mask = 0766
       directory mask = 0766
       browsable = yes
       guest ok = no

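     Side note for anyone copying this: a quick way to sanity-check smb-extra.conf edits (a sketch, assuming Samba's stock tools are available on the unRAID console) is Samba's own config checker:

       # testparm parses the effective smb.conf (which pulls in
       # smb-extra.conf) and dumps the final per-share settings:
       testparm -s
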
  9. Well, originally I had what he listed in the original post:

       [tc]
       path = /mnt/tc
       valid users = [USER]
       write list = [USER]
       force user = root
       create mask = 0711
       directory mask = 0711
       browsable = no
       guest ok = no

     That didn't work. So I did some googling and some RTFM, and eventually I came up with:

       [tc]
       path = /mnt/tc
       valid users = [my users here]
       write list = [my users here]
       browsable = yes
       guest ok = no

     That seems to work in my initial tests with my user account. I am going to experiment with other users and see if that is what I want. Which is to say: other people on the network cannot see the files, and the desired users can see and read/write them.

  10. First, sorry for reviving a thread that is almost 2 years old. Second, thanks MortenSchmidt for the excellent writeup on getting TrueCrypt working. It works great. My problem is that I cannot get the container shared over Samba. I put the code into /boot/config/smb-extra.conf as suggested, but it doesn't show up on the network. I tried stopping Samba and then starting it again, and when that didn't work I tried rebooting unRAID. Do you have any suggestions?

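     To be concrete about "stopping and starting Samba", this is roughly what I mean (a sketch; the rc.samba path is the usual Slackware-style location on unRAID, and smbclient just lists what the server is exporting):

       # Restart Samba so it re-reads smb.conf / smb-extra.conf:
       /etc/rc.d/rc.samba restart

       # List the exported shares without a password prompt, to see
       # whether [tc] made it into the effective config at all:
       smbclient -N -L localhost
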
  11. Hello again, all. So I have been using the scripts above for a couple of weeks now, and I am having a few problems I hope someone here can help me out with.

     When I unmount Carry or Carry2 via the Unassigned Devices UI, I get a phantom folder at /mnt/disks/Carry[2]. I can browse the folders and look at the file names, but if I try to do anything to them, such as copy or delete, I get "Input/Output error (5)". I am assuming that is because the disk is not physically mounted but the file system is still retaining the mount points, if that makes any sense. Does anybody know why it is doing this, and how I can prevent it? (A quick check for this is sketched after this post.)

     The second issue I am getting is a chown permissions error, as detailed above. I think it has something to do with the flags on the rsync command, but in theory shouldn't a script running as root be able to do anything it wants to the files? Sorry for the questions; I am still a novice when it comes to Linux file permissions.

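     The quick check mentioned above (a sketch using the standard util-linux 'mountpoint' tool; Carry is the mount point from the post):

       # 'mountpoint' reports whether a directory is an active mount; if
       # /mnt/disks/Carry is NOT a mount point, the phantom folder is just
       # a stale directory left behind after the unmount:
       if mountpoint -q /mnt/disks/Carry; then
           echo "Carry is a live mount"
       else
           echo "Carry is not mounted - stale leftover directory?"
       fi
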
  12. Ok, so I've finally gotten around to starting this project. Basically, I have two portable USB HDDs that I rotate out to the remote site: Carry and Carry2. Here is my modified mover script:

       #!/bin/bash

       # This is the 'mover' script used for moving files from the cache disk to the
       # main array. It is typically invoked via cron.

       # After checking if it's valid for this script to run, we check each of the top-level
       # directories (shares) on the cache disk. If, and only if, the 'Use Cache' setting for
       # the share is set to "yes", we use 'find' to process the objects (files and
       # directories) of that directory, moving them to the array.

       # The script is set up so that hidden directories (i.e., directory names beginning
       # with a '.' character) at the topmost level of the cache drive are also not moved.
       # This behavior can be turned off by uncommenting the following line:
       # shopt -s dotglob

       # Files at the top level of the cache disk are never moved to the array.

       # The 'find' command generates a list of all files and directories on the cache disk.
       # For each file, if the file is not "in use" by any process (as detected by the
       # 'fuser' command), then the file is copied to the array, and upon success, deleted
       # from the cache disk. For each directory, if the directory is empty, then the
       # directory is created on the array, and upon success, deleted from the cache disk.

       # For each file or directory, we use 'rsync' to copy it to the array. We specify the
       # proper options to rsync so that files and directories get copied while preserving
       # ownership, permissions, access times, and extended attributes (this is why we use
       # rsync: a simple mv command will not preserve all metadata properly).

       # If an error occurs in copying (or overwriting) a file from the cache disk to the
       # array, the file on the array, if present, is deleted and the operation continues
       # on to the next file.

       # Only run script if cache disk enabled and in use
       if [ ! -d /mnt/cache -o ! -d /mnt/user0 ]; then
         exit 0
       fi

       # If a previous invocation of this script is already running, exit
       if [ -f /var/run/mover.pid ]; then
         if ps h `cat /var/run/mover.pid` | grep mover ; then
           echo "mover already running"
           exit 0
         fi
       fi

       echo $$ >/var/run/mover.pid
       echo "mover started"

       cd /mnt/cache
       shopt -s nullglob

       for Share in */ ; do
         if grep -qs 'shareUseCache="yes"' "/boot/config/shares/${Share%/}.cfg" ; then
           echo "moving \"${Share%/}\""
           # Only continue if at least one carry disk is mounted
           if [ ! -d /mnt/disks/Carry ] && [ ! -d /mnt/disks/Carry2 ]; then
             echo "Carry is not here"
             exit 0
           else
             if [ -d /mnt/disks/Carry ] ; then
               echo "Carry is here"
               find "./$Share" -depth \( \( -type f ! -exec fuser -s {} \; \) -o \( -type d -empty \) \) -print \
                 \( -exec rsync -i -dIWRpEAXogt --numeric-ids --inplace {} /mnt/disks/Carry/ \; -delete \)
             fi
             if [ -d /mnt/disks/Carry2 ] ; then
               echo "Carry2 is here"
               find "./$Share" -depth \( \( -type f ! -exec fuser -s {} \; \) -o \( -type d -empty \) \) -print \
                 \( -exec rsync -i -dIWRpEAXogt --numeric-ids --inplace {} /mnt/disks/Carry2/ \; -delete \)
             fi
           fi
           find "./$Share" -depth \( \( -type f ! -exec fuser -s {} \; \) -o \( -type d -empty \) \) -print \
             \( -exec rsync -i -dIWRpEAXogt --numeric-ids --inplace {} /mnt/user0/ \; -delete \) -o \( -type f -exec rm -f /mnt/user0/{} \; \)
         else
           echo "skipping \"${Share%/}\""
         fi
       done

       rm /var/run/mover.pid
       echo "mover finished"

     This works; however, I get a chown permissions error when running the script.

     The exact message is:

       ./Eric/krusaderEt4029.tmp
       rsync: chown "/mnt/disks/Carry/." failed: Operation not permitted (1)
       .d..t.og... ./
       rsync: chown "/mnt/disks/Carry/Eric" failed: Operation not permitted (1)
       .d..t.og... Eric/
       >f+++++++++ Eric/krusaderEt4029.tmp
       rsync: chown "/mnt/disks/Carry/Eric/krusaderEt4029.tmp" failed: Operation not permitted (1)
       rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1165) [sender=3.1.0]
       ./Eric/krusaderEt4029.tmp
       .d..t...... ./
       .d..t...... Eric/
       >f+++++++++ Eric/krusaderEt4029.tmp

     I don't know why I am getting that error. I think it is something about not being able to change the owner of the files as it copies them? I also get a similar error using the daily backup script posted in the Unassigned Devices forum: https://lime-technology.com/forum/index.php?topic=45807.0

     I also added this line to my go script:

       cp -p /boot/config/mover /usr/local/sbin/mover

     and of course I copied my modified mover script to the flash drive. So far, everything appears to be working in testing except for the above error. Can anybody help me resolve that? I will make some test changes to the array and report back in the morning on how things are working. Does anybody have any suggestions to make this cleaner?

     Thanks for the help, unevent. You gave me just enough that I was able to get things moving. (Edited to fix a bug in the mover script for anybody else who wants to use this.)

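     One plausible explanation (just a guess, not confirmed here): the Carry disks are formatted with a filesystem that cannot store Linux ownership, so the 'o' (owner) and 'g' (group) letters in the -dIWRpEAXogt option string fail on every chown. A variant of the carry-disk copy with those two letters dropped would look like this:

       # Same find/rsync pipeline as above, minus 'o' and 'g' in the
       # rsync options, so no chown is attempted on the destination:
       find "./$Share" -depth \( \( -type f ! -exec fuser -s {} \; \) -o \( -type d -empty \) \) -print \
         \( -exec rsync -i -dIWRpEAXt --numeric-ids --inplace {} /mnt/disks/Carry/ \; -delete \)
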
  13. Hey all. Great work on this docker. I've been using it for quite a while with two Raspberry Pi 2s, keeping everything in sync and updated. Everything was working fine, and I decided to do another Pi project. Since then, the Pi 3 has come out, so I decided to upgrade my HTPCs to Pi 3s and repurpose the Pi 2s. I use OSMC on the Pis, and their release for the Pi 3s is v16. So I changed the VERSION variable in the docker to 16 (it wasn't set before), installed the Pis, and everything almost just worked. My problem is that some of my media is missing from the library. E.g., I have all episodes of a show; they showed up in the TV Shows category on Isengard, but in Jarvis some of the episodes are missing. I can browse to them in Files. The files are there, just not in the library. I tried a library update, but it didn't do anything. Any suggestions?

  14. That's actually a pretty good idea and would cover 99% of my use case. How would I get it to then copy off of the external drive and onto the off-site system? I guess I could manually copy the external to the cache drive.

  15. Like the topic says, I want to clone my existing unRAID box and then move it off-site. Then, whenever my unRAID changes, I want to duplicate the change somehow to the off-site box. The off-site box will not have network access. My incremental changes will be at most a couple hundred gigabytes. I'm thinking I could plug an external HDD into the unRAID box and have any changes made to the server copied to the external HDD. Then, once a week or so, I take the external HDD to the remote location and that box copies the changes over. Does anybody know a way of doing this without my having to kludge together a bunch of scripts and cron jobs? I've been searching the forums, with a little bit of googling thrown in, but it seems like most backup solutions are based around network access. Thanks.

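     For what it's worth, the two-hop mechanism I am describing, written as plain rsync (hypothetical paths, and assuming for illustration that the carry disk is large enough to hold the tree being synced):

       # Hop 1, on the main server: stage the current state of the share
       # tree onto the carry disk (only differences are copied):
       rsync -a --delete /mnt/user/ /mnt/external/

       # Hop 2, at the remote site: replay those changes from the carry
       # disk onto the off-site box:
       rsync -a --delete /mnt/external/ /mnt/user/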