
Everything posted by Abzstrak

  1. For some reason I can't download the zip to look... is it an Intel SSD? Once Intel SSDs reach their rated maximum data written, they go read-only. The drive must be replaced to fix it, if that's the case.
  2. Anyone else having trouble with the syncthing docker and updating? Mine keeps telling me there is an update, but there isn't... I can update and then click on check for updates, and it thinks there is another one... My version is 1.2.2, which I believe is up to date. My other dockers are not doing this, just syncthing.
  3. This works for the SSH keys by adding the following to the /boot/config/go file, where foo is the public key:
     mkdir -p ~/.ssh
     echo 'foo' > ~/.ssh/authorized_keys
     chmod 700 ~/.ssh
     chmod 600 ~/.ssh/authorized_keys
     (Note: ~/.ssh needs mode 700, not 644; without the execute bit the directory can't be traversed and sshd will ignore the key.)
  4. I use pfSense; I see it as a core networking device, so I would never virtualize it.
  5. I also use syncthing, it's handy for more than just photos too
  6. I would normally use minicom for such a need. I just checked Nerd Pack; it doesn't seem to be in there... I'd say ask the maintainer to add it. Makes a lot more sense than a VM.
  7. Yeah, I can install some others; I guess the ones I tried before weren't using SSL... the ones using SSL are all erroring out, so it's probably not something to do with this plugin.
  8. Yeah, it's right... still getting SSL errors. I can install other apps without issue.
  9. Just tried to install, but it's failing SSL verification:
     plugin: installing: https://raw.githubusercontent.com/Squidly271/ca.turbo/master/plugins/ca.turbo.plg
     plugin: downloading https://raw.githubusercontent.com/Squidly271/ca.turbo/master/plugins/ca.turbo.plg
     plugin: downloading: https://raw.githubusercontent.com/Squidly271/ca.turbo/master/plugins/ca.turbo.plg ... failed (SSL verification failure)
     plugin: wget: https://raw.githubusercontent.com/Squidly271/ca.turbo/master/plugins/ca.turbo.plg download failure (SSL verification failure)
  10. What do you mean they disappeared? Like showmount doesn't show them? Did the NFS server crash? Can you just restart NFS? Are they all in one folder and you're just doing a "rm *" or something?
  11. It's not the VPN service, it's Netflix (and friends) actively blocking them... Complain, loudly, to Netflix.
  12. So why not just schedule a script to remount? Why reboot the whole system just to mount a filesystem? You could schedule it to run every minute (or every 15 minutes, or hourly, or whatever), and if the filesystem isn't currently mounted, mount it... makes more sense to me anyway.
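The check-and-remount idea above can be sketched as a small script for a scheduled job (e.g. via the User Scripts plugin). The /mnt/remote mount point, the //server/share source, and the credentials file below are hypothetical placeholders, not details from the post:

```shell
#!/bin/bash
# Remount a share if it has dropped; run on a schedule (e.g. every minute).
# MOUNTPOINT, SOURCE, and the credentials file are hypothetical placeholders.
MOUNTPOINT="/mnt/remote"
SOURCE="//server/share"

# /proc/mounts lists every active mount; if ours is absent, try to remount.
if ! grep -qs " $MOUNTPOINT " /proc/mounts; then
    mount -t cifs "$SOURCE" "$MOUNTPOINT" -o credentials=/root/.smbcred \
        || echo "remount of $MOUNTPOINT failed; will retry on next run"
fi
```

Because the script does nothing when the mount is already present, running it every minute costs essentially nothing.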
  13. New drives should mean new cables, right? Always buy new cables; they are cheap and not worth the pain in the ass they cause when they are screwed up. You probably knocked a cable a little loose. I'd finish the preclear and add it back in.
  14. Probably a SATA drive; enable the write cache with hdparm. To make it persistent, edit your go file.
  15. Lol, awesome... totally lucky guess, as I have one Mac and everything else is Linux, no Windows. Glad it seems to be working. Just an FYI, I do understand that enabling that can cause some issues for Windows clients. I've never paid any attention to them since they don't affect me, but it's something to watch out for or research.
  16. I have the same problem; I had to add it to my go file. My boot USB is sda, and the next six are data and parity drives. I put this line in my go file, and it seems to work well: hdparm -W 1 /dev/sd[b-g]
  17. Two of my six do the same thing; just enable it manually in the go file... no big deal. My boot USB is sda and the next six are my data drives, so I put this line in the go file: hdparm -W 1 /dev/sd[b-g]
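One caveat with the go-file line above: sdX letters can shuffle between boots, so sd[b-g] may not always point at the same drives. A hedged alternative keys off stable /dev/disk/by-id names instead; the ata-* names below are placeholders you would replace with your own drives' IDs:

```shell
# go file fragment: enable write cache on specific drives by stable ID so the
# setting survives sdX letter reshuffles. The ata-* names are placeholders;
# list your real drives with: ls /dev/disk/by-id/
for disk in /dev/disk/by-id/ata-DRIVE_ID_1 /dev/disk/by-id/ata-DRIVE_ID_2; do
    hdparm -W 1 "$disk"
done
```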
  18. Is it just via Samba? Just curious whether it shows correctly via NFS/AFP/SFTP... I'm wondering if it's just a Samba issue. I'm running 6.7.1 and mine appears to be correct via SMB shares, I think. I don't have Windows to test from, but connecting from Plasma in Fedora 30 the dates and times all look correct... and I haven't screwed around with the SMB settings at all, other than enabling the Apple thing (Enhanced macOS interoperability) for my wife's MacBook. Maybe try turning that on?
  19. Turn off Nextcloud for a while, spin down the disks, and see what happens. Continue turning things off until something has an effect. Once you figure out what program or docker or whatever it is, then you can troubleshoot it... right now you're just guessing.
  20. I would, yes. Do your worst: abuse the system, see how big the transcode folder gets, then add 10-15% and go back to using a RAM drive that is at least that large, assuming you have enough RAM. Don't worry about the caching, it's normal... unused memory is wasted memory... it's a Unix thing. No worries, the system gives it up for other use.
  21. True, but the automatically created one uses mount defaults, including a max size of half your RAM... which could be very important. For example, I found that if I am DVRing and watching something, I could pull down 22GB of space, and I have 32GB of RAM. If I used the defaults, I'd run out of space and my DVR recordings would get auto-cancelled. I mounted manually with a max size of 24GB to avoid this.
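The manual mount described above, with an explicit size cap instead of the tmpfs default of half of RAM, would look something like this (the mount point path is a placeholder; the 24G cap matches the post). It could go in the go file so it survives reboots:

```shell
# Mount a RAM-backed transcode directory capped at 24GB, overriding the
# tmpfs default of half of RAM. The mount point path is a placeholder.
mkdir -p /tmp/PlexRamScratch
mount -t tmpfs -o size=24G tmpfs /tmp/PlexRamScratch
```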
  22. Yeah, I've just been lazy... I was thinking of doing that, as well as shooting out an email/SMS to my phone letting me know it's corrupt. It's been 10 or 11 days since my last corruption, so... the motivation isn't all there either.
  23. Yeah, I've been keeping hourly backups of my database; it makes it easier to restore to where I need to. Just script copying it hourly to a different folder; works fine. I run this command hourly (using the User Scripts plugin). Obviously you'll need to create the dbbackups folder first.
     tar zcvf /mnt/cache/appdata/PlexMediaServer/Library/Application\ Support/Plex\ Media\ Server/Plug-in\ Support/Databases/dbbackups/com.plexapp.plugins.library.db-$(date +%A-%H%M).tar.gz /mnt/cache/appdata/PlexMediaServer/Library/Application\ Support/Plex\ Media\ Server/Plug-in\ Support/Databases/com.plexapp.plugins.library.db
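Two subtleties with the command above. First, because the filename uses date +%A-%H%M (weekday plus time), each archive is overwritten a week later, so the scheme self-rotates with roughly a week of history. Second, tar strips the leading / from the stored path, so restoring means extracting with -C / (after stopping Plex first). A tiny self-contained demonstration of that round trip on a scratch file, not the real database:

```shell
#!/bin/bash
# Demonstrate the backup/restore round trip used above, on a scratch file.
# tar stores the path without its leading /, so extraction goes to -C /.
set -e
tmp=$(mktemp -d)
echo 'fake plex db' > "$tmp/com.plexapp.plugins.library.db"

# Back up (same shape as the hourly command; tar warns about the leading /).
tar zcf "$tmp/backup.tar.gz" "$tmp/com.plexapp.plugins.library.db"

# Simulate loss of the file, then restore relative to the filesystem root.
rm "$tmp/com.plexapp.plugins.library.db"
tar zxf "$tmp/backup.tar.gz" -C /
cat "$tmp/com.plexapp.plugins.library.db"
```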
  24. You can use the storage while parity is being created, obviously sans any redundancy until it's complete. Be aware that you'll probably top out around 50MB/s unless you enable turbo mode, and even then you'll have a hard time averaging twice that. I copied over 12TB and it took 3 days over gigabit... keep in mind I never saw gigabit speed except in small bursts, and I didn't know to enable turbo mode until partway through. Cache won't help with this transfer unless you have 8TB+ of cache. Getting used to the way things work will take a bit of time too; I'd suggest playing with a spare machine to get an idea of how things work and speed up the transition.
  25. Yeah, kinda not cool in that we can't test anything easily. I just grabbed the binary off my desktop and stuck it in the folder I have been testing db consistency with; it seemed fine.