tcharron

  1. I got Serviio 2.0 working on my unraid server (with my Sony Bravia TV). I based all of this on the riftbit serviio container, which was last updated nine months ago (not sure whether it will be supported beyond that, but it is 2.0). The container is described at https://registry.hub.docker.com/r/riftbit/serviio (you don't need anything from that page -- it's just an fyi).

     1) Create a directory on your server that can be used to store library data. I chose /mnt/myapps/Serviio-riftbit. Create this directory before continuing.

     2) Run the following commands, replacing the reference to the directory from step 1 with whatever directory you created. For each "-v" option, map one of your media directories (I hope this part is self-evident); the format is "-v [source directory]:[docker directory]:rw". You can probably use :ro at the end, but I haven't tried that to see if it causes any errors. I use rw since Serviio can be configured to download metadata, subtitles, etc.

        docker pull riftbit/serviio
        docker run --name serviio_r -d --network host \
          -p 23423:23423/tcp -p 23424:23424/tcp -p 8895:8895/tcp -p 1900:1900/udp \
          -v /etc/localtime:/etc/localtime:ro \
          -v /mnt/myapps/Serviio-riftbit/library:/opt/serviio/library:rw \
          -v /mnt/user/unraid/Videos/Movies:/media/Movies:rw \
          -v /mnt/user/unraid/Videos/Movies_Viewed:/media/Movies_Viewed:rw \
          -v /mnt/user/unraid/Videos/Kids:/media/Kids:rw \
          -v /mnt/user/unraid/videos_tv:/media/videos_tv:ro \
          riftbit/serviio

     3) Go to the Docker page within your server. You will see a new "serviio_r" docker there. Disable any other Serviio instances, or anything that uses conflicting ports (8895, 23423, 23424), then stop and restart the "serviio_r" docker.

     4) Navigate your browser to [your unraid server]:23423/console

     5) Configure Serviio. In particular, add each of the media paths to the shared folders under the "Library" section.

     Note that the docker cannot be configured or updated within the Unraid Docker page; it can only be stopped/started/deleted (or the console accessed) from there. If you want to make a change (for example, adding a new media directory), edit your command from step 2 (the "docker pull" line is not needed again), delete the "serviio_r" docker via the unraid Docker page, and run the edited command again; a sketch of that workflow follows below. My experience is that Serviio configuration settings survive when I do this (although I have only tried it once or twice).

     I hope that this helps others! I'm not really intending to support this, but I will do my best to help if I can. If I had more time and energy, the next logical step would be to make this all configurable from within Unraid, without requiring the script. Tim
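     For reference, a minimal sketch of that update workflow as a script (same container name and paths as above; keep every -v line from your step 2 command -- only one media mapping is shown here):

        #!/bin/bash
        # recreate the serviio_r container after editing the media mappings;
        # the library data lives on the host, so serviio settings survive
        docker stop serviio_r 2>/dev/null
        docker rm serviio_r 2>/dev/null
        docker run --name serviio_r -d --network host \
          -v /etc/localtime:/etc/localtime:ro \
          -v /mnt/myapps/Serviio-riftbit/library:/opt/serviio/library:rw \
          -v /mnt/user/unraid/Videos/Movies:/media/Movies:rw \
          riftbit/serviio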
  2. If you log in to your account at https://www.crashplanpro.com/app/#/console/ , what does it say? I wonder if you have somehow ended up with recent backups being identified as a new device.
  3. I am not trying to back up a network share. I am trying to back up local drives TO a network share. So effectively there is no way to use CrashPlan to back up local machines to my unraid server, even if I pay for CrashPlan.
  4. I just installed CrashPlan Small Business on a windows machine, and this docker on my unraid box. For the life of me, I can't see how to establish a backup of the windows box to the unraid box. The only destination options I can see are "CrashPlan PRO Online" and "Add Local Destination". When I use the local destination option, I get an error if I select a network drive. I know that the peer-to-peer support has disappeared, but can't I at least back up to a network destination now? By the way, the OS in the logs reports as:

        /mnt/user/appdata/CrashPlanPRO/log/app.log:OS = Linux (4.15.0-60-generic, amd64)
        grep: /mnt/user/appdata/CrashPlanPRO/log/service.log: No such file or directory
  5. I just had a cache drive fail. My former configuration was 3 240G SSD drives, which showed 360G of usable space. After the failure of one drive, it still showed 360G usable. I figured that was ok, as the data was all there (but no longer protected). I replaced the bad drive with a 512G drive. I expected this to increase the capacity from 360G to 480G. This expectation was confirmed by the calculator at http://carfax.org.uk/btrfs-usage/, which told me that 32G would go unused. This is presumably because everything written to the two 240G drives would be duplicated on the 512G drive, using 480G of it (and hence leaving 32G over). In fact, unraid reports that I now have an available cache pool of 496G. This doesn't make sense to me. What am I missing? (The arithmetic I'm working from is sketched below.)
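     A minimal sketch of the btrfs raid1 arithmetic I'm assuming (device sizes as above; the "naive" figure is my guess at where unraid's 496G comes from):

        # btrfs raid1 keeps two copies of every chunk on two different devices
        total=$(( 240 + 240 + 512 ))    # 992G raw
        naive=$(( total / 2 ))          # 496G -- raw/2, apparently what unraid reports
        largest=512
        others=$(( total - largest ))   # 480G -- the 512G drive can only mirror
                                        #         what fits on the other two
        usable=$(( naive < others ? naive : others ))
        echo "usable=${usable}G unused=$(( largest - others ))G"   # usable=480G unused=32G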
  6. Yeah... as soon as I saw it I knew what was going on. I think that the entire idea of having things auto-installed from my flash drive via auto_install and a line in my go file was something that I added 5 years ago or so. It worked well for a very long time, and then was long forgotten but survived the various upgrades along the way. My point in sharing it here was just to help anyone else figure it out, as I found a few threads with similar problems. The generally accepted solution was to do a clean install to a new USB key -- which is not required with the above knowledge.
  7. Old topic, but it just bit me. For me, the problem was gcc-4.8.2-x86_64-1.txz.auto_install, which was located in my /boot/packages directory. A line in my go file was installing it, and it was the file preventing my machine from booting. I was able to verify this by trying to install it manually after my system finally booted properly...

        Verifying package gcc-4.8.2-x86_64-1.txz.
        Installing package gcc-4.8.2-x86_64-1.txz:
        PACKAGE DESCRIPTION:
        # gcc (Base GCC package with C support)
        #
        # GCC is the GNU Compiler Collection.
        #
        # This package contains those parts of the compiler collection needed to
        # compile C code. Other packages add C++, Fortran, Objective-C, and
        # Java support to the compiler core.
        #
        Executing install script for gcc-4.8.2-x86_64-1.txz.
        Package gcc-4.8.2-x86_64-1.txz installed.
        Verifying package glibc-2.17-x86_64-7.txz.
        Installing package glibc-2.17-x86_64-7.txz:
        PACKAGE DESCRIPTION:
        # glibc (GNU C libraries)
        #
        # This package contains the GNU C libraries and header files. The GNU
        # C library was written originally by Roland McGrath, and is currently
        # maintained by Ulrich Drepper. Some parts of the library were
        # contributed or worked on by other people.
        #
        # You'll need this package to compile programs.
        #
        Executing install script for glibc-2.17-x86_64-7.txz.
        cp: /lib64/libc.so.6: version `GLIBC_2.25' not found (required by cp)
        /sbin/ldconfig: Cannot lstat ld-2.17.so: No such file or directory
        /bin/bash: line 56: /usr/bin/basename: No such file or directory
        /bin/bash: line 56: /usr/bin/rm: No such file or directory
        /bin/bash: line 57: /usr/bin/basename: No such file or directory
        /bin/bash: line 57: /usr/bin/cp: No such file or directory
        /bin/bash: line 58: /usr/bin/basename: No such file or directory
        /bin/bash: line 59: /usr/bin/rm: No such file or directory
        ...
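     For anyone with the same setup: the go-file mechanism was presumably a loop along these lines (a reconstruction -- I no longer have the exact line):

        # in the go file: install anything flagged for auto-install at boot
        for pkg in /boot/packages/*.auto_install; do
          p="/tmp/$(basename "$pkg" .auto_install)"   # restore the real .txz name
          cp "$pkg" "$p"
          installpkg "$p" && rm -f "$p"
        done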
  8. fyi, this approach worked -- my server was rebuilt from parity and I didn't lose any data. The only strange/worrisome moment was the two array shutdowns that I had to do -- each of them took about 10 minutes to complete. I think that the delay was related to some kind of network share issue with the temporary mount I had created on the problem server, which was allowing me to access data on my other unRaid server. For some reason, that mount became unusable when the array was taken offline: the web GUI became unresponsive, and all filesystem commands were very slow (including lsof, mount, df). In any event, I'm a happy camper this morning. Oh -- a scan of the USB key did not find any errors, so I am not sure what caused the 0 byte super.dat file. I made sure to take a copy of it now that it's all working well! Thanks!
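     A minimal sketch of that backup step, assuming super.dat is at the path mentioned in my later posts:

        # keep a known-good copy of the array configuration off the live file
        cp /boot/super.dat /boot/super.dat.bak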
  9. Well, I can't guarantee that I don't have some device on my network which will send a file to unraid when it's up. If that happens, then unraid could try to write to my stale/bad drive. I think I'll do a partial clear to ensure that the drive can't be written to.
  10. I don't see how unraid will know which of my data disks is the 'bad' one... since it is stale but has a valid filesystem on it. The moment I assign all drives including parity and bring the array up, won't it write to the parity drive (as soon as any process anywhere writes to the array), and therefore destroy the data I need to recover the bad drive? (For clarity, I'm referring to the moment between steps 3 and 4 above.) More specifically, if any data gets written to the bad/stale drive, then the parity bits for those writes will get written to the parity drive... which means I won't be able to restore the drive properly. Maybe I should first start a preclear on the stale drive (and cancel it after a minute or two) -- that should ensure that the old/stale filesystem is not recognized by unraid, preventing unraid from writing to it and thereby destroying the parity data I need (rough sketch below). ??
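     The partial clear I have in mind would have the same effect as starting a preclear and cancelling it, something like this (the device name /dev/sdX is a placeholder, and this irreversibly wipes the start of that disk -- triple-check the device before running it):

        # zero the first 100MB: the partition table and xfs superblock go away,
        # so unraid no longer sees a mountable filesystem it might write to
        dd if=/dev/zero of=/dev/sdX bs=1M count=100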
  11. I don't think I did that. I'm pretty sure that when I momentarily added the drive to the array it was the old data drive #1 in slot #1. I'm in the process of copying that data to my second server. However, the old disk is stale. My goal here is to get what I can off of the drive in case my rebuild from parity has some kind of problem. I ended up using a command-line mount to mount it read-only (sketched below), and I am copying its contents to my second unraid server. Even though it's stale, it's better than nothing. Once this drive's contents are copied off of it, am I required to preclear it? I really just want to start the rebuild -- preclearing seems like a waste of time. Maybe a preclear would find a problem, but given that my issue was a cable problem, I don't see much value in spending a day or two on one -- I'd rather start the rebuild from parity. If the rebuild finds an error, I won't be any worse off; if it doesn't, I will have avoided a slow preclear.
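     The read-only mount I used was along these lines (the device and mount point are placeholders, not my exact ones):

        mkdir -p /mnt/stale
        # ro alone can still replay the xfs journal; norecovery skips that,
        # so nothing at all is written to the disk
        mount -t xfs -o ro,norecovery /dev/sdX1 /mnt/stale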
  12. Thanks for taking the time Johnny! I'm running 6.1.7 so this sounds like it could work. The other thread seems like the exact same problem I had. You said I do know which is the parity drive, but I may have broken things anyway... When I looked at the empty array the first time, I momentarily thought I'd reassign drives. I assigned the first data drive to the correct slot. When it didn't show up with the correct FS/size/used/free data (and I noticed this after about 3 seconds), I immediately changed it back to 'unassigned' and came to the forums. Hopefully that lapse in judgement didn't cause anything to be written to the drive. (?) I don't think that I have a spare drive matching the size of the one that was noted as failed (4TB). What I do have is a second unRaid server with 10TB of free storage. The failed drive was probably only about 25% used. How can I mount that drive in read-only mode so that I can copy its contents off the drive (and then use the drive as if it were a new one)? It was formatted as XFS.
  13. Well, I figured I can get the data off the drives (even the one that failed, as I suspect it was just a bad cable). The problem is that the disk failure occurred a while ago (maybe 10 days or so), so the 'bad' drive will have stale data. I guess I'll email LimeTech and see if there are any workarounds. Super.dat is quite the weak point. It was clearly corrupted 10 days ago, but the array went on blindly, just waiting for a shutdown/reboot. It would have been nice if it had detected the imminent problem somewhere in those 10 days.
  14. I don't think it exists, as I upgraded a drive from 3TB to 4TB about 10 days ago -- the day that the 0 byte super.dat file was created. I also replaced a failed drive. I haven't taken a backup since then. I do have a screen print of the array configuration from just before I did all this.
  15. So, I had a bad disk. I let it run as simulated from the parity for a while. Then, I shut down and brought the system back up. It was at this point that I had an empty array. The /boot/super.dat file is 0 bytes and is dated from almost two weeks ago. What I want to do is to use the good disks and the parity drive to rebuild the missing data drive. I know which drive is parity. How do I do this?