Everything posted by Syco54645

  1. I posted on GitHub after I saw the message to post there instead, so I will continue there. I am posting a reply now because it looks like all of the config values are correct. To answer the question: the various failures were caused by not including a callback URL. The comment for that line in the JSON samples is shifted down a line, which caused confusion on my end.
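     For anyone hitting the same thing, the fix amounts to making sure the client entry actually contains the callback URL. A minimal sketch of what such an entry might look like — the field names here (apiKey, secret, redirectUri) are my assumptions, not necessarily the project's real keys, so check the config.json samples shipped with the project:

     ```json
     {
       "name": "FrankLFM",
       "data": {
         "apiKey": "YOUR_LASTFM_API_KEY",
         "secret": "YOUR_LASTFM_SHARED_SECRET",
         "redirectUri": "http://tower.local:9078/lastfm/callback"
       }
     }
     ```

     Note the single slash before /lastfm/callback; a doubled slash like the one in the callback URL quoted in the next post can hint at a stray trailing slash in the configured base URL, which is worth checking too.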
  2. Trying to get this set up and working, and I am having issues with the client. It is telling me "Status: Auth Interaction Required". When I click the link to (re)authenticate or initialize, I get various forms of failure. I have created an API key and put the values into the config.json. I am unsure how it should function when I get passed to Last.fm and then back to the server at http://tower.local:9078//lastfm/callback?state=FrankLFM&token=XXXXXXXXXXXX. At this point the page says OK and that is it. Heading back to the application, it still says that it is not authenticated. Any idea what I am missing?

     2021-09-28T09:33:10-04:00 info : [App ] Server started at http://localhost:9078
     2021-09-28T09:33:10-04:00 info : [Sources ] (partyPlex) plex source initialized
     2021-09-28T09:33:10-04:00 info : [Source - Plex - partyPlex] Initializing with the following filters => Users: N/A | Libraries: party | Servers: N/A
     2021-09-28T09:33:10-04:00 info : [Sources ] (FrankPlex) plex source initialized
     2021-09-28T09:33:10-04:00 info : [Source - Plex - FrankPlex] Initializing with the following filters => Users: frank | Libraries: N/A | Servers: N/A
     2021-09-28T09:33:10-04:00 warn : [Scrobblers ] (FrankLFM) lastfm client auth failed.
     2021-09-28T09:33:10-04:00 error : [Client Lastfm - FrankLFM] Error: Invalid session key - Please re-authenticate
         at CWD/node_modules/lastfm-node-client/lib/ApiRequest.js:136:11
         at processTicksAndRejections (internal/process/task_queues.js:95:5)
         at async LastfmApiClient.callApi (file://CWD/apis/LastfmApiClient.js:84:20)
         at async LastfmApiClient.testAuth (file://CWD/apis/LastfmApiClient.js:148:30)
         at async LastfmScrobbler.testAuth (file://CWD/clients/LastfmScrobbler.js:34:27)
         at async ScrobbleClients.addClient (file://CWD/clients/ScrobbleClients.js:240:27)
         at async ScrobbleClients.buildClientsFromConfig (file://CWD/clients/ScrobbleClients.js:195:17)
         at async file://CWD/index.js:128:9
     2021-09-28T09:33:10-04:00 error : [Client Lastfm - FrankLFM] Could not successfully communicate with API
     2021-09-28T09:33:10-04:00 error : [API - Lastfm - FrankLFM] Testing auth failed
     2021-09-28T09:33:10-04:00 info : [Scrobblers] (FrankLFM) lastfm client initialized
  3. Thanks! This greatly simplifies things for me. Thanks to trurl for helping as well.
  4. In that case the process seems to be:
     1. mv /mnt/diskX/* /mnt/diskY/
     2. Once complete, make sure all files were moved.
     3. Stop the array.
     4. Go to the disk's settings and change the file system type to XFS.
     5. Start the array.
     6. Format the drive that needs it.

     Would that be a good assessment of what I need to do? Is there a better way to move the files than just mv in a screen session?
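     One safer variant of the move step above is copy-verify-delete rather than a bare mv, so an interrupted transfer cannot lose files. This is only a sketch: /mnt/diskX and /mnt/diskY are the placeholders from the post, and the demo below substitutes temp directories so it can run anywhere.

     ```shell
     # Stand-ins for /mnt/diskX (source) and /mnt/diskY (destination).
     src=$(mktemp -d)
     dst=$(mktemp -d)

     # Fake data to move.
     mkdir -p "$src/tv"
     echo "movie data" > "$src/movie.mkv"
     echo "episode data" > "$src/tv/ep1.mkv"

     # Copy first, preserving attributes, then verify the trees match
     # before deleting the originals -- safer than a bare `mv` if the
     # transfer is interrupted partway through.
     cp -a "$src/." "$dst/"
     if diff -r "$src" "$dst" > /dev/null; then
         echo "verified: trees match, removing originals"
         rm -rf "$src"/*
     fi

     ls -R "$dst"
     ```

     On a real array you would run this inside screen (or use rsync, which can re-verify with checksums) and only reformat the source disk after the verify step passes.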
  5. What method do you recommend for moving files from one drive to another within the array? I thought doing a "mv /mnt/disk1/* /mnt/disk11/" would not update the array. If I am incorrect, then I was given wrong information and this will be quite an easy process.
  6. So how can I empty a drive while preventing new files from being added to it at the same time? I guess that is the only stumbling block at this point.
  7. Yes, I am in the process of waiting for an 8TB drive to add to the array for space and in hopes of starting conversions. What would the process look like? I assume I have to stop anything that writes to the array and then use unbalance to move the files, and this is where I get fuzzy: when I move the files, are they moved as far as the array is concerned, or will I have to rebuild parity?
  8. Sure
  9. Wouldn't using New Config and placing a drive in the wrong location cause data loss? I do not trust myself not to mess up somewhere along the line with 10 drives to convert. Yes please, if you can give advice I would greatly appreciate it (I am on 6.9.0 if it matters). I have posted in the reddit community multiple times and was told that I basically must use unbalance to move all of the files off of the drive, then use New Config to change the format of the empty drive. This video was given to me as the basic method, just not following the encryption bit.
  10. So I am sitting here with a server that was set up when ReiserFS was still the recommended file system, and I am looking for a simple way to convert these drives to XFS, but I am coming up empty. It seems that ReiserFS will eventually be fully abandoned in favor of other file systems. The issue is that all of the current processes require the somewhat dangerous step of creating a new config. Would it be possible to create a feature to aid in this process?
  11. So I am using this for 7 Days to Die and am trying to disable automatic updates for the server. I see mention of doing it but no instructions. Can someone please help me out?
  12. Yes, at this point I am just going to wait for the LSI card; nothing else is going to get me up and running, I think. I will probably swap to the LSI card and start the array, making sure anything that may write to the array is stopped, and allow the parity sync to happen. The errors were only cropping up last night during a sync.
  13. I have an LSI card on order, just waiting for it to arrive. I am seeing that Marvell is not recommended for Unraid. This card was recommended to me by someone in #unraid on freenode; not that it matters, just giving my reason for purchasing. I thought one of the Marvell cards was bad, because when I moved the drives to the onboard SATA and the other card the errors went away, until I tried to rebuild and got a bunch of read errors. I have done multiple extended SMART tests on the drives that are having said problems, and they check out fine every time (except for disk 9).

      Physically, my server is a 4U rackmount Rosewill case and the drives are all in Norco SS-500 cages. Card0 had 8 drives on it and card1 had 4. Cage0 and cage1 were on card0 and cage2 was on card1. I do not think the issue lies there, as I was first having issues with all drives on card0, which were in cage0. The 5th drive was working fine; it was plugged into the onboard SATA ports. I have removed drives from card0, moved the drives that were having issues to onboard SATA and the other 4 to card1, and I am now having issues with the drives originally on card1. Because of this I do not think the issue is in the cages, as I am having issues across all three.

      How would you suggest I test for stability? Create a new server on a different stick and toss drives in it? That actually does not sound like a bad idea now that I say it. I have a pile of drives that were pulled for age but were still performing well; I can create a new server with those on the current hardware and see if I have the same issues.
  14. Well, I read more docs and it seemed like New Config was the way to go. I did it and everything seemed fine until it wasn't: I started getting new read errors on completely different drives. I swapped the PSU for a brand new one and it did not make a difference. Here are the diagnostics files. They are not from when the read errors were occurring, at least I do not think that they are...
  15. I will try to accurately and completely explain what happened and the state that I am currently in. I have a server with 12 drives and dual parity; the parity disks are 8TB drives. Yesterday the power company stopped by to let me know they would be replacing a pole in front of my house. I shut down my server and went to lunch. When I got back the power was back on, so I started my server.

      Upon booting I had alerts that drives were missing, 5 to be exact. The array did start somehow, not sure how. Docker was also started and a few containers were running... Anyway, I stopped the array, and the drives were not detected by the OS. I thought perhaps it was a random issue, so I restarted. The drives all came back and the array restarted, but 2 were marked disabled and emulated. Then I started to get read errors on 6 drives (including the 2 that were "disabled"), so I once again shut down. I checked the layout of cables, and it turns out those drives were all on the same SAS controller. I removed the SAS controller and everything boots fine, all drives present, but the 2 drives are still disabled.

      That would be fine, but now I am getting a SMART error of "Current pending sector" with a value of 1 on another drive (it was connected to the SAS controller and was one of the ones throwing read errors). I ran an extended test on the 6 drives that were having read errors and all appear fine except for the one still having the pending sector error. As far as Unraid is concerned, I have two failed drives. With the SMART error I am worried about rebuilding parity and having an issue there. I want to point out that the two drives that are disabled are a 4TB and an 8TB. I cannot be sure that no data was written to the array when I booted, as Docker had started and a few containers were running...

      How would you proceed here? Would it be better to go the New Config route and hope that the drives match parity? Would it be better to remove the drives, then add them back and allow a rebuild? If so, should I add them both at the same time, or would it be better to do them individually? Would the 4TB rebuild faster than the 8TB? If so, that would give me the benefit of only having 1 failed drive if something else happens in this time period. I am worried I may have to start a manual recovery of files here, and that is going to be a mess...
  16. That would be perfect if there was a place to buy it...
  17. So I am trying to find a case with top-to-bottom external 5.25" bays. Is anyone making those any more? Something like this.
  18. I used wget to download it to /boot/config/plugins/preclear.disk/ and then did mv <script>. I gave up and am just running it in screen.
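      The manual flow being described can be sketched roughly as below. The download URL and script name are placeholders (they depend on which preclear script you grab), and the demo writes a dummy script into a temp dir standing in for /boot/config/plugins/preclear.disk/ so the sketch is runnable anywhere.

      ```shell
      # Temp dir standing in for /boot/config/plugins/preclear.disk/.
      plugdir=$(mktemp -d)

      # On the server this would be a wget of the real script, e.g.:
      #   wget -P /boot/config/plugins/preclear.disk/ <script-url>
      # Here we fake the downloaded script so the sketch can execute.
      cat > "$plugdir/preclear_disk.sh" <<'EOF'
      #!/bin/sh
      echo "preclear would run against $1 here"
      EOF
      chmod +x "$plugdir/preclear_disk.sh"

      # Running it under screen keeps it alive after the SSH session closes:
      #   screen -dmS preclear "$plugdir/preclear_disk.sh" /dev/sdX
      # Direct invocation for the demo:
      "$plugdir/preclear_disk.sh" /dev/sdX
      ```

      The screen session can later be reattached with `screen -r preclear` to check on progress.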
  19. It's the only one in the dropdown list? Yes.
  20. So I followed the instructions in the first post and it still says the script is "gfjardim - 0.9.6-beta". Is this to be expected?
  21. Ugh. I read the post multiple times and somehow missed that line. Thanks.
  22. Is there a way to use this from the GUI?
  23. Does anyone know which previous version worked?
  24. Actually we could use comskip as long as we have mkvtool as well. Here is some info for it. Would love to see this make it into the docker.
  25. So I just saw talk of Plexpy; any chance of getting a plugin for that? It seems pretty awesome for monitoring Plex.