Whiskeyjack

Members
  • Posts: 14
  • Joined
  • Last visited

Whiskeyjack's Achievements

Noob (1/14)

0 Reputation

  1. Thanks! After I replied I was looking through some different resources and I think I found my answer. Basically, with the main share set to Cache: Yes, the mover will move the media to the array on its schedule, which is the piece that I was missing. (There's a quick sketch of this after this list.)
  2. Did you ever figure this out? I'm in the same situation; my media lives on the array and my downloads are on cache.
  3. Yep, I tried a couple of times just to make sure. I'm exclusively a Linux user as well, so I'm confident I can rule that out. What I'm not confident about is how the shares are set up, but nothing *looks* wrong to me.
  4. I thought I was getting a good grasp of how this all works, but now I'm stumped. My assumption was that everything in /mnt/user/appdata is actually on /mnt/cache_appdata (my cache pool; configuration below). However, when I was making a backup of my Jellyfin appdata:
     cp -rp /mnt/user/appdata/jellyfin /mnt/user/appdata/jellyfin_1.7
     my 24G "rootfs" (see the df output below) continually filled up until the unRAID WebUI died. Obviously I have something configured improperly, but I'm not sure where to even begin looking (there's a path-check sketch after this list). I have my appdata share as such:
     And the cache disk:
     unRAID:
     Filesystem      Size  Used  Avail  Use%  Mounted on
     rootfs           24G   13G   11G    56%  /
     devtmpfs         24G     0   24G     0%  /dev
     tmpfs            24G  1.2G   23G     6%  /dev/shm
     cgroup_root     8.0M     0  8.0M     0%  /sys/fs/cgroup
     tmpfs           128M   11M  118M     8%  /var/log
     /dev/sda1        29G  483M   29G     2%  /boot
     overlay          24G   13G   11G    56%  /lib/modules
     overlay          24G   13G   11G    56%  /lib/firmware
     /dev/md1        7.3T  6.6T  698G    91%  /mnt/disk1
     /dev/md2        7.3T  6.6T  705G    91%  /mnt/disk2
     /dev/md3        7.3T  6.7T  603G    92%  /mnt/disk3
     /dev/md4        7.3T  6.6T  700G    91%  /mnt/disk4
     /dev/md5        7.3T  6.6T  699G    91%  /mnt/disk5
     /dev/md6        7.3T  6.5T  845G    89%  /mnt/disk6
     /dev/md7        7.3T  6.6T  750G    90%  /mnt/disk7
     /dev/md8         11T  9.9T  1.1T    91%  /mnt/disk8
     /dev/sdf1       466G   81G  386G    18%  /mnt/cache_appdata
     /dev/sdd1       466G  292G  174G    63%  /mnt/cache_downloads
     shfs             62T   56T  6.0T    91%  /mnt/user0
     shfs             62T   56T  6.0T    91%  /mnt/user
     /dev/loop3      1.0G  4.0M  905M     1%  /etc/libvirt
     /dev/loop2       40G   15G   23G    40%  /var/lib/docker
  5. Just found this post after making my own. I'm also having the same problem.
  6. Hey guys, I'm trying to track down a problem where my server crashes with unreadable scrolling text, forcing a hard reboot. I followed the steps here: Attached are the settings I'm using (for syslog and the share). As you can see in the share screenshot, there are no files on disk (nothing for syslog in /mnt/cache). I assume I'm making a simple mistake but I can't find it. Any tips?
  7. Change your repository to: linuxserver/sonarr:preview (a quick way to check the tag from a terminal is sketched after this list).
  8. Oh, I see! I opened the R510 to take some pictures, and the PSU runs to a small board that then feeds the mobo and backplane. Perfect, thank you!
  9. Thanks for the reply! If I'm understanding correctly, if I were to go with the SAS expander route, I would:
     1. Put an expander in each server (like these)
     2. Connect the two SAS expanders
     3. Connect the existing HBA from the R720 into the expander
     4. Connect the two backplane SAS connectors from the R720 to the expander
     5. Connect the two backplane SAS connectors from the R510 to the expander
     ...and I'm good to go? As for your last suggestion, what do you mean by unplugging the power from the mainboard? I'm assuming you mean pull power from the motherboard on the R510, but how would the drives receive power?
  10. Hey guys, I have an R720xd running unRAID currently. I also have an R510 that's otherwise unused, but I'm wondering if there's some way to connect them so that I can use the extra drive bays in the R510 to add to the array on the R720xd. If that's a stupid question, please let me know. Just hoping to make the most of the hardware I have rather than pay for another solution.
  11. I unfortunately can't get this to work at all, but I hope someone else can. I can access my Docker containers through my DNS:port, as I could before, but not the unRAID GUI. Port 51820 gives me "this site can't be reached". I've tried all the steps repeatedly, but no joy. Good luck to everyone else!
  12. Hi, I just installed this and pointed unmanic at my library, but nothing at all happens. Is there, like, a start button that I'm missing? My container path mappings are:
     /library/movies -> /mnt/user/plex/movies
     /library/tv -> /mnt/user/plex/series
     /tmp/unmanic -> /tmp/unmanic
     In the unmanic settings:
     /library/movies
     /tmp/unmanic
     Run Scan on Start is selected
     EDIT: For some reason, about an hour later, it started running. (The same mappings are sketched as a docker run after this list.)
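
Re: post 1 — a minimal sketch of how the Cache: Yes behaviour looks from a terminal. The share name "media" and the pool name "cache_downloads" are just examples from my own setup, and the manual mover command and Scheduler location are from memory, so double-check on your version:

     # New writes to a "Cache: Yes" share land on the cache pool first:
     ls /mnt/cache_downloads/media/     # fresh downloads show up here
     # On its schedule (Settings > Scheduler), the mover relocates them to the array:
     ls /mnt/disk*/media/               # after a mover run, the files sit on array disks
     # The merged user-share view stays the same either way:
     ls /mnt/user/media/
     # The mover can also be kicked off manually:
     mover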
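
Re: post 4 — a rough sketch of how I'd check where the Jellyfin appdata physically lives and do the copy without going through the /mnt/user FUSE layer. The paths assume appdata really is on cache_appdata, so adjust them to your share layout:

     # See which disk/pool actually holds the folder (this also matches /mnt/user*, which is fine):
     ls -d /mnt/*/appdata/jellyfin
     du -sh /mnt/cache_appdata/appdata/jellyfin /mnt/disk*/appdata/jellyfin 2>/dev/null
     # If it is all on the cache pool, copy within the pool path instead of /mnt/user:
     cp -rp /mnt/cache_appdata/appdata/jellyfin /mnt/cache_appdata/appdata/jellyfin_1.7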
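
Re: post 7 — if you want to sanity-check the tag from a terminal before editing the container (assuming linuxserver still publishes :preview):

     docker pull linuxserver/sonarr:preview   # pulls the preview-branch image
     docker images | grep sonarr              # confirm the :preview tag is present

In Unraid itself the usual route is just changing the Repository field on the container's edit page and hitting Apply.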
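
Re: post 12 — the same mappings written out as a plain docker run. The josh5/unmanic image name and the 8888 web UI port are assumptions on my part (the Unraid template may use a different repository), so treat this as a sketch:

     # Volume mappings match the container paths listed in post 12; 8888 is the assumed web UI port.
     docker run -d --name=unmanic \
       -p 8888:8888 \
       -v /mnt/user/plex/movies:/library/movies \
       -v /mnt/user/plex/series:/library/tv \
       -v /tmp/unmanic:/tmp/unmanic \
       josh5/unmanic

The roughly hour-later start would fit a periodic library scan kicking in rather than scan-on-start, but that's a guess.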