relink

Everything posted by relink

  1. I just realized that Midnight Commander is built into Unraid. Would I just stop the mover and then use that to move the folders from “/mnt/cache” to “/mnt/user”?
  2. What command would I need to run to move everything from cache to the array? Unless there’s some reason I shouldn’t, I will gladly stop the mover and do that instead.
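
     (A sketch of one way to do that manual move, one share at a time; "appdata" is just an example share name, and /mnt/user0 is the user-share view that excludes the cache, so the copy can't land back on the cache drive:)

        # Move one share's contents from the cache pool to the array.
        rsync -avh --remove-source-files /mnt/cache/appdata/ /mnt/user0/appdata/

        # rsync leaves the emptied directory tree behind; clean it off the cache.
        find /mnt/cache/appdata -type d -empty -delete

     (Midnight Commander works for this too; rsync just makes it easier to verify and resume.)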
  3. As the title says, I am trying to upgrade my cache disk, so I am moving everything to the array. I stopped the Docker and VM services first and set all shares to “Use cache: Yes”. The mover is running, but it’s been over 12 hours and it’s not even half done; I would have expected a transfer time closer to 6 hours. So far I have used “renice” and “ionice” to raise the priority of the processes involved in moving, but it doesn’t seem to have made much difference. I was hoping to do the cache upgrade this morning, but at this rate I’ll be lucky if I can do it by tonight.
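
     (For reference, the renice/ionice step looked roughly like this; the pgrep pattern is a guess at which processes the mover actually spawns:)

        # Bump CPU and I/O priority for the mover and its file-copy children.
        for pid in $(pgrep -f "mover|rsync"); do
            renice -n -5 -p "$pid"        # higher CPU priority (lower nice value)
            ionice -c 2 -n 0 -p "$pid"    # best-effort I/O class, highest level
        done

     (In hindsight, array writes are usually limited by parity updates rather than scheduling, which would explain why this made little difference.)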
  4. So I’ve been giving this some thought overnight, and I’d really like to take a stab at it. If all I need to do is make sure the file/directory exists on both the cache and the array, that’s easy enough to do. The tricky part is figuring out when a file was last accessed; considering the extra writes required, I doubt unRAID keeps track of “atime” on files. And while I don’t know how to use the Plex API, I’m thinking this could be made really simple. As an example: if a movie has been watched in the last week, copy its entire directory (so the srt files come too) to the cache. The script runs daily and checks the last time a movie was watched; once it’s been more than 7 days, the copy gets deleted. As for TV shows, I’m thinking it should copy entire seasons, but this could get messy quick, especially if - like me - you run something like DizqueTV.
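
     (A minimal sketch of that daily pass. Plex's /status/sessions/history/all endpoint is real, but the grep-based XML parsing is deliberately naive, the folder-named-after-title assumption may not match your library, and every path and token below is a placeholder:)

        #!/bin/bash
        # Hypothetical daily job: copy recently watched movie folders to the
        # cache, and expire cached copies not watched in 7 days.
        PLEX="http://192.168.1.10:32400"
        TOKEN="YOUR_PLEX_TOKEN"
        ARRAY_ROOT="/mnt/user0/movies"    # array-only view of the share
        CACHE_ROOT="/mnt/cache/movies"
        CUTOFF=$(( $(date +%s) - 7*24*3600 ))

        # Watch history comes back as XML, usually one <Video> element per line.
        curl -s "$PLEX/status/sessions/history/all?X-Plex-Token=$TOKEN" |
        grep '<Video' | while read -r line; do
            viewed=$(echo "$line" | grep -o 'viewedAt="[0-9]\+"' | grep -o '[0-9]\+')
            title=$(echo "$line" | sed -n 's/.*title="\([^"]*\)".*/\1/p')
            if [ -n "$viewed" ] && [ "$viewed" -ge "$CUTOFF" ] && [ -d "$ARRAY_ROOT/$title" ]; then
                # Copy (not move) the whole directory so the srt files come too.
                rsync -a "$ARRAY_ROOT/$title/" "$CACHE_ROOT/$title/"
                touch "$CACHE_ROOT/$title/.last_watched"
            fi
        done

        # Expire cached copies whose marker hasn't been refreshed in 7 days.
        find "$CACHE_ROOT" -maxdepth 2 -name .last_watched -mtime +7 |
        while read -r marker; do
            rm -rf "$(dirname "$marker")"
        done

     (Because the files are copied rather than moved, deleting the cache copy never loses data; the array copy is always still there.)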
  5. It would be awesome if you did; I actually already use your Plex preload script. I was analyzing it to see if I could reasonably tweak it, but it’s a bit over my head. Media files like movies, TV, and maybe music would be the top targets for something like this. A good chunk of my Plex usage is my kids watching the same shows and movies on repeat, but they also randomly change it up for a few weeks to something else. If the script could cache the entire series of a show, that would be awesome too; I could literally go days without the array spinning up at all. And I love the idea of copying the data instead of moving it - that way, if one of the SSDs were to fail, you wouldn’t lose any data.
  6. So, I have 2 kids, and for any of you who also have kids, you know they watch many of the same things on repeat all day. So instead of having to keep disks spinning, I was thinking of just moving their favorite movies to the cache. This would actually keep all my disks from spinning for a large portion of the day. But of course, I don't want to just do it manually; that's not any fun. So I wanted to see if there is any way to find the top 10 most watched movies (and maybe shows) in Plex, move them to the cache and keep them there, and if the favorites change over time, automatically move the less watched back to the array and pull in the new most watched. Figured it would be something fun to try to write a userscript for. Any thoughts? I doubt I'm the only person to want to try something similar.
  7. You know what, I've seen that link down there numerous times, but I can honestly say I've never noticed it, if that makes any sense.
  8. I found the docs for 6.9 after googling for them (idk where to find the link on the forums). Correct me if I'm wrong, but it sounds like all I need to do is add the new NVMe drive, assign it to a second pool, set which pool I want each share to use in the share settings, and then run the mover... is that about it?
  9. I want to add an NVMe SSD as a second cache pool to use exclusively for Docker and VMs, and I want to keep my existing SATA cache pool exclusively for write-caching shares. What would be the proper way to do this? And where can I find info on multiple cache pools? I am assuming I would want to make the NVMe the first pool so I don’t have to re-map all my containers, and then make the SATA the second, but that’s just a guess; I actually have no idea how this would work.
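
     (On the path layout with multiple pools in 6.9+: each pool mounts under its own name, so the re-mapping worry mostly goes away if containers use user-share paths. The pool name "nvme" below is just an example:)

        ls /mnt/cache    # existing SATA pool (the pool named "cache")
        ls /mnt/nvme     # new NVMe pool, if you name it "nvme"

        # Mappings that point at /mnt/user/appdata keep working no matter which
        # pool backs the share; only hard-coded /mnt/cache/appdata paths would
        # need editing after the share moves to the new pool.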
  10. I just had the exact same issue. For me it seemed to be the VM Backup beta plugin; I removed it and rebooted, and now everything is fine.
  11. I have been using the Wallabag docker for quite a while now and it's been great. However, suddenly (sometime in the last couple of days) I cannot access it anymore, locally or remotely. I checked the logs and saw a few things:

     [WARNING]: Found both group and host with same name: localhost
     [WARNING]: Platform linux on host localhost is using the discovered Python
     2021/03/02 12:28:41 [error] 270#270: *1 FastCGI sent in stderr: "PHP message: PHP Fatal error: Uncaught PDOException: SQLSTATE[HY000]: General error: 1 no such table: wallabag_internal_setting in /var/www/wallabag/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOConnection.php:67

     I'm not sure if the two things even have anything to do with each other, but that was all that stood out to me in the logs.
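
     (A "no such table" error from Wallabag is often just pending database migrations after an image update. A hedged first thing to try, where "wallabag" is whatever your container is actually named:)

        # Run Wallabag's pending Doctrine migrations inside the container.
        docker exec -it wallabag /var/www/wallabag/bin/console doctrine:migrations:migrate --env=prod --no-interaction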
  12. Yup, it's all good now. Everything seems to be working fine.
  13. SD is now saying the issue has been resolved, but I am still getting the exact same error.
  14. Ahh ok, so it seems I'm not the only one... Is this something that needs to be implemented in G2G, or is it something that SD needs to fix?
  15. So I created a separate guide2go container for testing. If I copy over my json files and run the commands, I get the same issue as before: I can see the xml and cache files get created, but there's no guide data in them. However, if I delete my files and try to start over, I notice that a new json file never gets created. Also, that error - [ERROR] json: cannot unmarshal number into Go struct field SDSchedule.stationID of type string - I swear I've seen it before... I just can't remember where. I've checked my post history and searched my email and found nothing... but I could swear I've seen that error before...
  16. That's what I did to get the output I posted. I ran:

     docker exec -it xteve_guide2go guide2go -configure /guide2go/1.yaml

     and chose option 5, "5. Create XMLTV File [/guide2go/1.xml]".
  17. I didn't fully catch that part on my first read - how do you get the "full log"? Also, if you already sent it, do I need to reply to the SD email, or are they already aware of it?
  18. I just got a reply and was coming here to say that. lol. Thank you. Hopefully I can get it sorted, because - of course - my in-laws just got into town yesterday and are staying with us.
  19. Xteve_guide2go suddenly won't pull in any guide data. I've had this up and running just fine since around February, I believe. Nothing in my lineup has changed, except I had to remove a lineup that wasn't needed anymore. I checked my Schedules Direct account first, and it's fine. I tried running the cronjob manually, as well as trying to create an XML manually, and either way I get the following output:

     Configuration [/guide2go/1.yaml]
     --------------------------------
     1. Schedules Direct Account
     2. Add Lineup
     3. Remove Lineup
     4. Manage Channels
     5. Create XMLTV File [/guide2go/1.xml]
     0. Exit
     Select Entry: 5
     2020/09/22 09:07:55 [URL ] https://json.schedulesdirect.org/20141201/status
     2020/09/22 09:07:55 [SD ] Account Expires: 2021-02-23 23:51:01 +0000 UTC
     2020/09/22 09:07:55 [SD ] Lineups: 2 / 4
     2020/09/22 09:07:55 [SD ] System Status: Online [No known issues.]
     2020/09/22 09:07:55 [G2G ] Channels: 70
     2020/09/22 09:07:55 [URL ] https://json.schedulesdirect.org/20141201/lineups/USA-DISH534-DEFAULT
     2020/09/22 09:07:56 [URL ] https://json.schedulesdirect.org/20141201/lineups/USA-FL09649-X
     2020/09/22 09:07:56 [G2G ] Download Schedule: 14 Day(s)
     2020/09/22 09:07:56 [URL ] https://json.schedulesdirect.org/20141201/schedules
     2020/09/22 09:07:58 [ERROR] json: cannot unmarshal number into Go struct field SDSchedule.stationID of type string
     2020/09/22 09:07:58 [G2G ] Download Program Informations: New: 0 / Cached: 0
     2020/09/22 09:07:58 [G2G ] Download missing Metadata: 0
     2020/09/22 09:07:58 [G2G ] Create XMLTV File [/guide2go/1.xml]
     2020/09/22 09:07:58 [G2G ] Clean up Cache [/guide2go/1_cache.json]
     2020/09/22 09:07:58 [G2G ] Deleted Program Informations: 0

     If I run the cronjob instead, it's the same output (just for both of my lineups); however, it finishes with the line below.

     { "status": true }{ "status": true }<html><head><title>Unauthorized</title></head><body><h1>401 Unauthorized</h1></body></html>

     Not sure what changed, so I'm really not sure how to fix this.
  20. Ok, this is going to sound ridiculous... I think it may actually be the macOS version of Firefox. I primarily access my unraid server through my laptop, which is a hackintosh running macOS Catalina. Anyway, I was on my phone and accidentally tapped on my unraid tab in my browser, and poof, it loaded almost instantly. I then connected to my Windows 10 desktop (with a probably very outdated version of Firefox) and tried to load my unraid server; again, it loaded almost instantly. I then went back to Firefox on macOS and it was the same crap show as earlier. I then opened Safari on macOS, logged in there, and it loaded instantly...
  21. I have not tested safe mode, but I have tried it with Dockers and VMs disabled.
  22. Ok yeah, I must have just not noticed, because the Dockers are running fine. But there is definitely something very wrong. I have uploaded diagnostics from before and after a reboot; even after the reboot something isn't right. I stopped the array so nothing was running, and something was still very off - the whole UI is incredibly slow. To clarify, I had to reboot from the command line over SSH because the buttons in the WebUI were unresponsive. serverus-diagnostics-20200902-1014.zip serverus-diagnostics-20200902-0941.zip