ccruzen

Members
  • Posts: 127

Everything posted by ccruzen

  1. I updated all plugins and Docker containers yesterday; there weren't any updates available today. I didn't reboot, however, so I'm guessing it's fixed now. I'll keep an eye on it and report back if there are any issues.
  2. I'm getting the same error on both of my Unraid servers, and it looks like the Unraid API is causing the issue. It's the same for both: /var/log/unraid-api is taking up all the space.

root@Cairon:/var/log/unraid-api# du -h /var/log
0       /var/log/pwfail
113M    /var/log/unraid-api
0       /var/log/preclear
0       /var/log/swtpm/libvirt/qemu
0       /var/log/swtpm/libvirt
0       /var/log/swtpm
0       /var/log/samba/cores/rpcd_winreg
0       /var/log/samba/cores/rpcd_classic
0       /var/log/samba/cores/rpcd_lsad
0       /var/log/samba/cores/samba-dcerpcd
0       /var/log/samba/cores/winbindd
0       /var/log/samba/cores/nmbd
0       /var/log/samba/cores/smbd
0       /var/log/samba/cores
628K    /var/log/samba
0       /var/log/plugins
0       /var/log/pkgtools/removed_uninstall_scripts
4.0K    /var/log/pkgtools/removed_scripts
8.0K    /var/log/pkgtools/removed_packages
12K     /var/log/pkgtools
4.0K    /var/log/nginx
0       /var/log/nfsd
16K     /var/log/libvirt/qemu
0       /var/log/libvirt/ch
16K     /var/log/libvirt
114M    /var/log

Looking in the unraid-api directory, here's what I'm seeing:

root@Cairon:/var/log/unraid-api# ls -l --block-size=M
total 113M
-rw------- 1 root root   0M Jan 10 05:12 stderr.log
-rw------- 1 root root   1M Jan 13 13:34 stdout.log
-rw------- 1 root root 113M Jan 13 13:34 stdout.log.1

The "Array was updated, publishing event" section is constantly repeating in stdout.log.1:

DEBUG: Loading state file for disks {"logger":"emhttp"}
DEBUG: Loading state file for shares {"logger":"emhttp"}
DEBUG: Array was updated, publishing event {"event":{"state":"STARTED","capacity":{"kilobytes":{"free":"19367601580","used":"13524285320","total":"32891886899"},"disks":{"free":"21","used":"9","total":"30"}},"boot":{"id":"Flash_Drive","device":"sda","comment":"Unraid OS boot device","exportable":true,"fsFree":30971576,>

...and that block just repeats over and over. Not sure what the issue is; I'll update the plugins and reboot.
  3. That's exactly why I'm going 11th Gen. I don't want to pay the ridiculous GPU prices if I don't have to but really want to get in on the hardware transcoding train. I'm hoping the issues get figured out soon though.
  4. @Dreeas Looks like a great build and very similar to what I'm planning. Glad to hear it's working out for you. Is this your first foray into Unraid? I just looked, and I purchased my first Pro license in 2011. Been using it ever since with great results.
  5. So, after hemming and hawing over things for a bit, this is what I'm thinking. Anyone have any gotchas I should be aware of? And yes, it may be overkill, but I'm looking to future-proof for a while.
     CPU: Intel i7-11700K
     CPU Cooler: Noctua NH-U9S
     Mobo: Asus PRIME Z590-A
     RAM: G.Skill Ripjaws V 64GB (2 x 32GB) DDR4-3600 CL18
     Will re-use the following:
     Case: Supermicro SuperChassis SC846TQ
     Backplane: Supermicro BPN-SAS2-846EL1
     HBA: Dell PERC H310
     10GbE NIC: Mellanox ConnectX-2
  6. I'm currently running a couple of Unraid servers on old Xeon hardware, an E3-1280 with 32GB RAM and an E3-1240 with 16GB RAM, and I'm looking to consolidate and improve my transcoding capability. I run a Plex server for a lot of my family and friends and don't currently use a GPU to transcode, so I'm keeping 4K files for in-home streaming and 1080p files for remote. I'm looking for something that could transcode up to 4 simultaneous 4K video files. With GPU prices what they are, that's leading me to think iGPU, but I haven't built a non-Ryzen computer in years. I'm also running a bunch of dockers and a couple of VMs (an Ubuntu server for online poker hosting, a couple of Windows sandboxes, and other Linux distros just to test out and play with), so I'm thinking something with a few more cores would be helpful. My main server is in a Supermicro 846 and I'll be repurposing everything but the CPU, motherboard, and RAM. Anyone have any advice, or anything I'm not thinking of that I need to consider? I'm not looking to spend a ton, but I'm willing to if needed. Thank you to anyone taking the time to look at this.
  7. Just checked and installed the update. Back up and working. Thank you!
  8. Just some thoughts from my experiences. For music, if you're already running Plex and have PlexPass, you can add your music share to Plex and stream to PlexAmp (which is awesome). Or, if you're not resource constrained, run an Airsonic docker for music and Booksonic for books. For books I heartily recommend Booksonic. The Booksonic Android app is pretty good, but what I usually do is use the Booksonic app to cache the next few books I want to listen to, then point Smart Audiobook Player's library at the Booksonic download folder and play them through Smart Audiobook Player (because the speed adjustment just works better). I've got them all running behind SWAG (reverse proxy). Hope that helps.
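If it helps with the setup side: the general idea behind running them all behind SWAG is a shared user-defined Docker network, so the proxy can reach each app by its container name on its internal port (rough sketch only; the network and container names here are just the ones from my setup, yours may differ):

docker network create proxynet                 # one shared bridge network for the proxy and the apps
docker network connect proxynet swag           # attach the reverse proxy
docker network connect proxynet airsonic       # attach each app the proxy will serve
docker network connect proxynet booksonic
# In SWAG's proxy confs you then point at the container name and internal port,
# e.g. http://airsonic:4040 or http://booksonic:4040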
  9. No worries here. I think the world needs as many surprised monkeys in sweaters as we can give it.
  10. I was right there with you guys (and following this thread) and ended up getting an HP ProDesk 600 G1 (Core i5-4570 3.2GHz / 8GB / 500GB) for right at $100 to my door. I've got it up on a shelf in my rack running Proxmox with my pfSense in a VM, and it's running great. I threw in a 4-port Intel NIC I had on hand and haven't had any issues. The only thing I'm not super happy about is that I can't figure out how to enable VT-d for full PCI passthrough support. Either way, I'm happy and wish you the best of luck figuring out what to go with.
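On the VT-d front, in case it helps anyone else: the BIOS toggle is only half of it, and the rest is on the Proxmox side. This is roughly how I'd check whether it's actually active (sketch only, assuming an Intel CPU and a GRUB-booted Proxmox install):

dmesg | grep -e DMAR -e IOMMU                    # VT-d shows up as DMAR/IOMMU messages at boot
grep GRUB_CMDLINE_LINUX_DEFAULT /etc/default/grub
# add intel_iommu=on to that line, then:
update-grub && reboot
ls /sys/kernel/iommu_groups/                     # populated once the IOMMU is usable for passthrough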
  11. Thanks for updating this. I've followed your advice and it's working great.
  12. Thank you! Worked great, and I'm back up and running!
  13. Same here; my reverse proxy isn't working anymore after the latest update. How did you go about downgrading the container, if I may ask?
  14. I'm sure this is an easy fix, but I'm not super proficient in Linux. I'm having a permissions issue I can't seem to figure out: any files I process with Beets, I can't modify from my Windows machine afterwards. I'm getting a "You need permission to perform this action" error. I've tried changing the docker's PUID and PGID to the values for nobody, but that didn't work. Any ideas?
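What I'm tempted to try is resetting ownership on the processed files after the fact, something like the below (assuming the share is /mnt/user/music and Unraid's usual nobody:users ownership; that path is just an example), but I'd appreciate a sanity check that it's the right approach:

# reset owner and permissions on what Beets wrote so the SMB user can modify the files again
chown -R nobody:users /mnt/user/music
chmod -R u=rwX,g=rwX,o=rX /mnt/user/music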
  15. You're right, thanks for that; not sure what I was thinking leaving the ports forwarded. And you nailed it! Thank you, that works perfectly now. I was thinking that since Booksonic was using port 4040, Airsonic's needed to be different, but in the proxy it must not matter. Thanks again!
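For anyone who finds this later: the proxy reaches each container over the shared Docker network on the container's internal port, so proxy_pass can stay on 4040 even with the 4041:4040 host mapping; the host port only matters when you hit the server directly on the LAN. A quick way to sanity-check the wiring (container names here are just from my setup):

docker network inspect proxynet    # lists the containers on the network; the proxy and the apps should all be there
docker port airsonic               # shows the 4041 -> 4040/tcp host mapping, which the proxy itself never uses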
  16. I've tried it with the context path and without (updating the nginx conf each time) and had no luck. It works locally just fine both ways; remotely it doesn't work at all with the "/airsonic" context path, and without it I'm at least getting to the nginx server and the 502 error. I'd love to get it working as a subdomain, since everything else I use is set up that way.
  17. Has anyone gotten this working with the Letsencrypt/Nginx proxy? For the life of me I can't figure it out. All of my other dockers are working fine, including Booksonic (which is basically the same-ish). It's working locally, but externally I'm getting a 502 Bad Gateway error. I've got the port set to 4041 (and forwarded in my router), and I've got the airsonic subdomain in my letsencrypt. Here's my airsonic.subdomain.conf file for Nginx:

# make sure that your dns has a cname set for airsonic and that your airsonic container is not using a base url
# to enable password access, uncomment the two auth_basic lines

server {
    listen 443 ssl;
    server_name airsonic.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        # auth_basic "Restricted";
        # auth_basic_user_file /config/nginx/.htpasswd;
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_airsonic airsonic;
        proxy_pass http://$upstream_airsonic:4041;
    }
}

Here's my docker run command:

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='airsonic' --net='proxynet' -e TZ="America/Chicago" -e HOST_OS="unRAID" -e 'PUID'='99' -e 'PGID'='100' -e 'JAVA_OPTS'='-Xms256m -Xmx512m' -p '4041:4040/tcp' -v '/mnt/user/':'/music':'rw' -v '/mnt/user/':'/playlists':'rw' -v '/mnt/user/':'/podcasts':'rw' -v '/mnt/user/':'/media':'rw' -v '/mnt/user/appdata/airsonic':'/config':'rw' 'linuxserver/airsonic'
88152de7001e5ea4e5698b1b20129d8eb3ebbce506ebf49e2954f967e217f5c5

The command finished successfully!

Thoughts? Thanks for any help.
  18. These are still available, will entertain any offers. Also have a Trendnet TEG-S24g available, it's in perfect shape, hardly used, asking $75 shipped to the US.
  19. I did this recently (well, not too recently) and it worked perfectly. The only issue I could imagine is if you were booting off of a disk image and not updating the flash drive when updating unRAID. Of course, any ESXi VMs will be no more, and you'll need to figure out how to migrate those. I just started over from scratch, so I can't help there.
  20. I'd have to go with Google Play Music. I subscribe so I can get almost everything I want. I also love that they let you upload your own music, up to 20,000 songs I think. So, for me, anything that isn't on Google Play Music I can upload and stream to any of my devices. I used to run Subsonic/Music Cabinet/Madsonic (played around with them all) and still do for some friends. However, for $15 a month for 6 of us, I've been much happier with GPM. The reason I went with GPM over Spotify is the ability to upload your own files. I listen to some very obscure bands that aren't on either service. With GPM, I upload them once and have access everywhere. With Spotify, you have to have the files on the device you're listening on. Just my two cents. For Subsonic, I used DSub. I only have Android devices, so I can't help for your wife.
  21. I've really only played around with VMs so far, as I've felt very constrained by the RAM situation. That said, I would ideally be able to run at least 1 if not 2 LibreELEC VMs (I know they don't need much RAM) and 2 or 3 other VMs, mainly just to play around in.
  22. Thanks, ashman. I've been getting by with 16GB for the last few years, so I'd think doubling that could definitely get me a couple more. That said, for a couple hundred more, I'm tempted to do something like this. I'm assuming that would be a major upgrade over what I'm currently running. Thoughts?