tribble222

Everything posted by tribble222

  1. You can fix it by changing the permissions of the es folder inside the TubeArchivist appdata folder.
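The exact fix depends on the install, but the kind of permission change described above can be sketched like this (the path and the UID are assumptions; check which user your es container actually runs as):

```shell
#!/bin/sh
# Hypothetical helper: make an appdata subfolder writable for a container.
# The example path and UID below are assumptions -- verify your own setup.
fix_es_perms() {
    dir="$1"
    # Give the owning user full access to everything under the folder
    # (capital X: execute bit only on directories, not plain files).
    chmod -R u+rwX "$dir"
    # If running as root on the Unraid host, you may also need to hand
    # ownership to the container user (uncomment and adjust the UID):
    # chown -R 1000:0 "$dir"
}
```

Usage might look like `fix_es_perms /mnt/user/appdata/tubearchivist/es`, with that path adjusted to wherever your TubeArchivist appdata lives.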
  2. For anyone from the future encountering the issue "WARNING: Failed to load/generate certificate: save cert: open /config/cert.pem: permission denied": I was able to solve it with chown -R nobody:users /mnt/user/appdata/syncthing and by ensuring I had PUID and PGID set to 99 and 100 respectively in the container's docker config.
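For reference, the container-side half of that fix could look something like the following docker run sketch (port and other flags omitted; treat the whole thing as an assumption to adapt to your own template):

```shell
# Hypothetical equivalent of the docker config described above: run the
# container with PUID/PGID matching nobody:users (99:100 on Unraid) so
# files it creates under /config are owned by the same user the chown
# command above assigned.
docker run -d --name syncthing \
  -e PUID=99 -e PGID=100 \
  -v /mnt/user/appdata/syncthing:/config \
  lscr.io/linuxserver/syncthing
```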
  3. Woohoo, that worked! Thanks for fast reply!
  4. I just updated to latest, and now my apps tab has disappeared, and if I go to /apps it's a blank page... Here's my recent log:

     Mar 23 18:26:09 DAX emhttpd: req (2): cmd=/plugins/community.applications/scripts/updatePLG.sh&arg1=community.applications.plg&csrf_token=****************
     Mar 23 18:26:09 DAX emhttpd: cmd: /usr/local/emhttp/plugins/community.applications/scripts/updatePLG.sh community.applications.plg
     Mar 23 18:26:10 DAX root: plugin: running: anonymous
     Mar 23 18:26:10 DAX root: plugin: running: anonymous
     Mar 23 18:26:10 DAX root: plugin: skipping: /boot/config/plugins/community.applications/community.applications-2018.03.15.txz already exists
     Mar 23 18:26:10 DAX root: plugin: running: /boot/config/plugins/community.applications/community.applications-2018.03.15.txz
     Mar 23 18:26:10 DAX root: plugin: running: anonymous

     I figure I can maybe fix it by rebooting my server, but is there any way to restart just the apps plugin so I don't have to reboot?
  5. I'm sure this is answered somewhere, but I'm probably just not using the right search terms. I had a user share that was set to "use cache disk: yes". I put stuff on the share, then noticed it has a triangle next to it because it's not stored redundantly. I went into the share settings and changed it to "use cache disk: no", but the triangle is still there a day later. Do I need to move the data manually somehow to fix this? Thanks
  6. Watching with interest. I ordered a similar setup to put in my supermicro 24 bay 4u. Got the 9207 as well so hopefully the interrupt thing fixes the slow boot but if not I don't plan on restarting this thing every day anyhow.
  7. Any power consumption numbers? I'm considering the same build but am hoping for a lowish idle watts.
  8. Yeah, continuing my reading, I'm coming to the conclusion that a gaming VM + NAS isn't really a hands-off experience. I guess I'll go for building a low-powered unRAID box with dockers and keep my gaming rig separate. Maybe when my gaming components get old I can cycle them into my NAS and buy new gaming stuff.
  9. Actually, maybe I'll just get another 16GB of RAM and an i7 7700K and call it a day. Looks like the mobo is OK for VT-d after all.
  10. Thanks for the response. I'd want to support a max of 2 streams of Plex transcoding. My motherboard claims to support it (VT-d), but I read online that some people are having issues... Makes me think I should just buy known reliable hardware and save the headache. Ideally I don't want to futz with this thing. Is there a recent build list FAQ for what I want out of this machine? My searches have only found stuff that's a year or more old.
  11. Hi, I haven't used unRAID for several years, but now I'm looking to combine things into one PC. I want Plex, Sonarr, CouchPotato, etc., plus a VM for gaming and a VM for a normal desktop. Or I could do just one VM for gaming + desktop together, I suppose. I have a Z170A Gaming M7 motherboard, an i5 6600K CPU, and 16GB of DDR4 3000 RAM. Should I use what I have or get something else, like a Xeon? My primary goal is to have this thing "just work" once I get it all set up. Money is not much of an issue, but it's nice to save if my current hardware is sufficient.
  12. Looking to sell my old server. It has been a really reliable workhorse for me. Not sure what it's worth, but I'll hear any offers. All of these components were selected to be very energy efficient.

      unRAID Pro License on Lexar Firefly USB
      Centurion 590 case with three 3-drive inserts to easily change drives and keep the drives cool
      Biostar TA780G M2+ motherboard
      Kingston 1GB (2 x 512MB) 240-pin DDR2 800 (PC2 6400) SDRAM
      AMD BE-2400 processor, 2.3GHz dual core
      Super Quiet Sunbeam CR-SW-K8 92mm CPU cooler
      Antec Earthwatts 500W power supply
      SuperMicro MV8 board
      1 x 1TB WD Green
      2 x 750GB WD Green
      5 x 500GB (3 WD, 1 Maxtor, 1 Seagate)
      1 x 320GB Seagate
      (Total of 5.32TB)

      There is room for 3 more drives the way it is now. Willing to take any offers from someone who would like to buy in person in the San Francisco Bay Area. I think shipping would be cost-prohibitive.
  13. You could create a local backup and then rsync it over. Otherwise there are a bunch of guides here for backing up to NFS, SSHFS, and SMBFS http://wiki.rdiff-backup.org/wiki/index.php/TipsAndTricks
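The "create a local backup, then rsync it over" approach could be sketched like this (paths and the remote host are placeholders, not from the original post):

```shell
#!/bin/sh
# Hypothetical helper: archive-mode mirror that also deletes files at the
# destination which no longer exist at the source.
mirror() {
    rsync -a --delete "$1" "$2"
}
# Usage (placeholder paths; trailing slash on the source copies its
# contents rather than the directory itself):
#   mirror /mnt/user/documents/ /mnt/user/backup/documents/
#   mirror /mnt/user/backup/documents/ user@backuphost:/backups/documents/
```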
  14. I find rdiff-backup best for my needs. From the website: "Compared to rdiff-backup, rsync is faster, so it is often the better choice when pure mirroring is required. Also, rdiff-backup does not have a separate server like rsyncd (instead it relies on ssh-based networking and authentication). However, rdiff-backup uses much less memory than rsync on large directories. Second, by itself rsync only mirrors and does not keep incremental information (but see below). Third, rsync may not preserve all the information you want to back up. For instance, if you don't run rsync as root, you will lose all ownership information. Fourth, rdiff-backup has a number of extra features, like extended attribute and ACL support, detailed file statistics, and SHA1 checksums." I like rdiff-backup because it keeps the backup as a current mirror and then stores all the earlier changes as diffs. A few years back I had a catastrophic failure and was able to just mount my backup drive, and it ran exactly as the failed drive did. But if I end up deleting something, or needing an earlier version of a file, I can just restore it from the diff.
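The mirror-plus-diffs workflow described above could be sketched like this (all paths are assumptions; for the remote case rdiff-backup also accepts user@host::/path targets over ssh):

```shell
#!/bin/sh
# Hypothetical wrapper around the rdiff-backup workflow: each run keeps
# the destination as a browsable mirror of the latest state, with older
# versions stored as reverse diffs under rdiff-backup-data/.
backup_docs() {
    src="$1"; dest="$2"
    rdiff-backup "$src" "$dest"   # run repeatedly; each run adds an increment
}
# Usage (assumed paths):
#   backup_docs /mnt/user/documents /mnt/backup/documents
#   rdiff-backup --list-increments /mnt/backup/documents
# Restore a file as it was 10 days ago:
#   rdiff-backup -r 10D /mnt/backup/documents/report.txt restored-report.txt
```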
  15. I've been using rdiff-backup for a while, but recently had to start over from scratch with unRAID while troubleshooting a problem with my server (it turned out a SATA cable went bad after a couple of years). Anyway, I like unMENU, and figured this time around I might as well create a package that everyone can use to install rdiff-backup, in case anyone else wants to use it. This is my first time creating an unMENU package. I tested it and it seems to work fine for me, but YMMV. I'm running 5b14, if that matters. rdiff-backup requires Python (which is already in unMENU packages, you just have to install it) and librsync (not already included). I therefore have 2 packages here that you need: librsync and rdiff-backup. I separated them because I wasn't sure how well it would work to have them both install in the same script. You'll probably also want to install ssh so you can back up remotely. Anyway, long story short, make sure to install Python and then install librsync and rdiff-backup from below. Edit 3/17/14: updated rdiff-backup-unmenu-package.conf with a new URL. rdiff-backup-unmenu-package.conf librsync-unmenu-package.conf
  16. I'm having a problem with user share permissions. I upgraded from 4.7. Unless I set my shares to "public" they seem to always be shared as "read-only", even if I give a particular user read/write permissions. I'm accessing the shares through a box running Ubuntu. Everything worked fine in 4.7 with this same setup. I did run the new permissions utils script but that didn't help. Any help appreciated, thanks.
  17. Okay, thanks for the reply. I'll keep an eye out for the 5.x release.
  18. I'm sure this is buried somewhere in the forum here but despite my searching I could not find it. I've added rtorrent and it's working great. The only problem is that if a torrent is downloading and I press the "stop array" button then all the drives show up as "unformatted" except the drive I'm downloading to and parity, both of which remain mounted. Obviously, this is because rtorrent is still writing to open files on the disk. Is there a script or something I can edit so that I can stop rtorrent before the drives are unmounted? Thanks
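A common shape for the kind of pre-stop hook asked about above is a sketch like this (where to hook it in and the exact process name are assumptions for your setup):

```shell
#!/bin/sh
# Hypothetical pre-unmount hook: ask rtorrent to shut down and wait for
# it to exit, so it isn't holding files open when the drives unmount.
stop_rtorrent() {
    killall -q -TERM rtorrent || return 0   # nothing to stop: done
    # Wait up to ~30s for the process to disappear.
    for i in $(seq 1 30); do
        pgrep -x rtorrent >/dev/null || return 0
        sleep 1
    done
    killall -q -KILL rtorrent               # last resort
}
```

The point of the TERM-then-wait pattern is that rtorrent gets a chance to flush and close its files cleanly before the array stop proceeds.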
  19. Gotchya. That I-RAM drive is pretty interesting - hadn't heard of it before. What is the advantage of installing the packages every time and figuring out unionfs over having a static filesystem that you rsync to the ram drive on boot and back to the disk on shutdown? If you're syncing the entire thing then you don't have to write individual sync scripts.
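The sync-the-whole-tree idea mentioned above can be sketched as a pair of hooks (mount points are placeholders):

```shell
#!/bin/sh
# Hypothetical boot/shutdown pair for a RAM-backed app directory.
# Arg 1: the persistent on-disk copy; arg 2: the RAM drive mount.
load_ramdrive() { rsync -a "$1/" "$2/"; }           # disk -> RAM at boot
save_ramdrive() { rsync -a --delete "$2/" "$1/"; }  # RAM -> disk at shutdown
# Usage (assumed mounts):
#   load_ramdrive /mnt/disk1/ram-image /mnt/ramdrive
#   ... run the apps out of /mnt/ramdrive ...
#   save_ramdrive /mnt/disk1/ram-image /mnt/ramdrive
```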
  20. Forgive me for a stupid question, and it's probably already discussed elsewhere that I couldn't find, but why would you have to reinstall packages every time in the first place? Once you're running unRAID on a full slackware distro can't you just install rtorrent and leave it be? Is there something difficult about writing to the array from internal programs?
  21. So that panic doesn't occur when you use the web interface to stop and start the array?
  22. Thanks for this, bubbaQ. I've spent a few years with Linux, but it would have taken me forever to figure this out if not for your guide. I've been following it today and just finished getting my new kernel working properly. I had a little trouble getting DMA enabled, but once I added ATIIXP to the kernel it worked. Oh, also, when testing, I had to be sure to compile the new kernel from 2.6.24.5. If I tried to compile it from my new unRAID kernel, I would get an error. The only things I did differently were what brain:proxy mentioned in the previous thread, namely doing "make modules_install" instead of "make modules install" and compiling those other two parts into the kernel... but I'm not sure if those changes were even really necessary.