Syco54645


Posts posted by Syco54645

  1. 19 minutes ago, abhi.ko said:

    Thank you - appreciate the quick responses @Syco54645.

     

    If anyone else feels that I should not attempt this with the VM running please let me know - otherwise I plan to proceed with this plan. 

No problem. I wish you the best of luck. Before I did the conversion I was super anxious, but after I started it really wasn't that bad. It just takes forever.

  2. There was no issue leaving them "unbalanced".

     

I do not have any VMs, but I would think Home Assistant will be OK. I am running my Home Assistant in a Docker container but want to switch to Home Assistant OS at some point.

  3. 11 minutes ago, abhi.ko said:

Apologies for digging up this old thread - but I am trying to change the FS on multiple data disks from reiserfs to xfs.

     

So, I believe the above steps should work for me; I just have to repeat them for each of the reiserfs disks. Is that accurate?

     

    Questions -

1. Would I have to stop all Dockers and VMs while doing this?
2. Do I need to copy the contents of the disk back to the original disk after formatting it to xfs? If I don't, is there anything in the config that I need to change?

    Steps to follow for multiple disks:

1. Shut down VMs and Docker
2. From Disk 1 (reiserfs), move contents to the empty Disk 20 (xfs)
3. Stop the array
4. Change Disk 1 to xfs
5. Restart the array
6. Repeat steps 2-5 for the next disk, until all are xfs
7. Restart Docker and VMs

One of my VMs is Home Assistant - so this would mean that all the smart home related stuff will go down during this process. Is there another way?

If your containers write to the array, or if you store your Docker image on the array, you'd have to turn those off. Stuff that only reads from the array and stores its data on the cache drive (like Plex) can be left running.

     

    You don't need to copy the contents back to the now empty drive. 

     

You are missing the step of formatting the xfs disk, but otherwise your steps seem correct to me (a sketch of the per-disk move is below).
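
For reference, a minimal sketch of the per-disk move in step 2, assuming Disk 1 is the reiserfs source and Disk 20 is the empty xfs destination (run it in a screen session since it can take many hours; unBALANCE or plain mv would work just as well):

rsync -av /mnt/disk1/ /mnt/disk20/    # note the trailing slash on the source, so the contents land directly in disk20
rsync -avn /mnt/disk1/ /mnt/disk20/   # dry run afterwards; anything it lists did not make it across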

  4. 5 minutes ago, itimpi said:

It looks like the available space is less than the Minimum Free Space setting for your share, which is why you get told there is no space available.

    Thank you. I had somehow missed that.

     

Edit: I cannot change this value to 0 for some odd reason. Is there something per device that must be set up as well?

So I am in the midst of creating a new server and I currently have a 1TB disk for the cache pool and a 12TB disk for the array. There is no parity configured at this time. On the Main tab in the UI it shows the disk/array has 1.2TB free, yet when I try to move anything else to the array it reports no space left on the device. Does anyone have any ideas?

     


    indigojr-diagnostics-20240320-2003.zip

  6. 1 hour ago, bonienl said:

None of the Unraid updates change this setting; only a manual (i.e. user) change would do this.

     

     

I don't really care how it happened; I'm just making it known that the setting was apparently enabled automatically on my server. The last time I went into the disk settings was to change the default from reiserfs to xfs, back when xfs was being pushed over reiserfs. I don't even see how I could have done this accidentally. For me to make a change to the md_write_setting I would have to go into the disk settings, change it, and then click save.

Regardless, it is solved and that is all that truly matters.

  7. 2 hours ago, JorgeB said:

I would say that should be impossible; have you ever used the turbo write plugin?

     

I have not. Write speed has never been a concern of mine. It is just odd that it was on, as I never turned it on; I did not even know this option existed. It has probably been 4 years since I last went into the disk settings.

So I have been having an issue for a while where nearly all of the drives in my array are always spun up. Currently only 1 out of 13 is in standby mode. Yesterday I tried stopping all Docker containers and enabling them one by one until it happened. There does not appear to be a correlation there; the drives just spin back up at random. With the containers left stopped I still saw drives randomly spin up, and there was no file access going on during this time. Oddly, the read/write counts do not change and the speed stays at 0.0B/s for all of the drives. I have tried the Open Files plugin and the only files being accessed are on disk12 by Frigate (the issue existed well before Frigate was installed); I need to move that off of the array, but that is a known item. Attaching my diagnostics to see if anyone can figure it out.

    essex-diagnostics-20230411-0823.zip
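
For reference, a couple of console spot checks for narrowing this kind of thing down (disk12 is just an example path; iostat is not part of stock Unraid and would need to come from NerdTools or similar):

lsof +D /mnt/disk12 2>/dev/null    # list any processes holding files open on a given array disk
iostat -dx 5                       # per-device I/O stats every 5 seconds, to catch which disk is actually being touched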

  9. So I updated the container today and it is failing to boot. Attaching the log below.

     

    *** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
    *** Running /etc/my_init.d/05_set_the_time.sh...
    Setting the timezone to : America/New_York
    *** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
    *** Running /etc/my_init.d/05_set_the_time.sh...
    Setting the timezone to : America/New_York
    
    Current default time zone: 'America/New_York'
    Local time is now: Sun Nov 13 17:59:49 EST 2022.
    Universal Time is now: Sun Nov 13 22:59:49 UTC 2022.
    
    Date: Sun Nov 13 17:59:49 EST 2022
    *** Running /etc/my_init.d/10_syslog-ng.init...
    syslog-ng: Error setting capabilities, capability management disabled; error='Operation not permitted'
    
    [2022-11-13T17:59:49.440067] file ../../../glib/gthread-posix.c: line 1339 (g_system_thread_new): error 'Operation not permitted' during 'pthread_create'
    
    /etc/my_init.d/10_syslog-ng.init: line 32: 33 Trace/breakpoint trap /usr/sbin/syslog-ng --pidfile "$PIDFILE" -F $SYSLOGNG_OPTS
    *** /etc/my_init.d/10_syslog-ng.init failed with status 1
    
    
    *** Killing all processes...

     

  10. 2 minutes ago, JorgeB said:

You can also do that, but add a trailing slash to the source path or it will create a folder called old-disk-12 on the destination:

     

    rsync -av /mnt/disks/old-disk-12/ /mnt/disk12

     

Thanks for confirming that will work. And whoops, thanks for pointing that out. That was more of a pseudo-command, as I am actually doing it per folder on old-disk-12, but this will be useful for anyone else trying to do the same thing.
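
For anyone else who trips over this, the trailing slash is the whole difference (the paths are just the ones from this thread):

rsync -av /mnt/disks/old-disk-12/ /mnt/disk12    # copies the contents of old-disk-12 into /mnt/disk12
rsync -av /mnt/disks/old-disk-12 /mnt/disk12     # creates /mnt/disk12/old-disk-12 and copies into that instead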

  11. 4 minutes ago, JorgeB said:

That is what you should have done; formatting is never part of a rebuild. Now, if you still have the old disk, do a new config with the old disk (Tools -> New Config), re-sync parity, then replace the disk.

Why not just use the old disk and issue the following command? This would avoid having to do a parity re-sync and the risk of fat-fingering the New Config setup. Not to mention I would maintain parity protection through this process.

    rsync -rav /mnt/disks/old-disk-12 /mnt/disk12

     

  12. 1 minute ago, JorgeB said:

You don't lose data; just upgrade Unraid and repeat the rebuild: unassign the disk, start the array, stop the array, re-assign the disk, start the array to rebuild.

     

That did not work for me; I had no option to rebuild, only to format. The drive is completely empty. It seems the only route forward is copying the data from the old xfs drive over to the array.

I apparently lost data because of this bug. I replaced a 2TB drive with a 14TB one and now the data is gone. I still have the old 2TB disk and have downgraded to 6.9. What can I do to fix this?

     

Edit: So I thought I would just mount the old drive and write the files back to the new drive... I cannot mount the old drive via USB; I get an invalid superblock error. The drive is formatted xfs. If I put the disk back into the system via a SATA connection I am able to mount it via the Unassigned Devices plugin, but I can't when it is connected via USB. Any ideas here?
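
In case it helps anyone, these are the read-only checks I would run while the disk is connected over USB (sdX is a placeholder for whatever device the USB bridge shows up as):

lsblk -o NAME,SIZE,FSTYPE /dev/sdX    # confirm the partition and filesystem are even visible over the USB bridge
blkid /dev/sdX1                       # check whether an xfs signature is detected on the partition
xfs_repair -n /dev/sdX1               # -n = no modify; report superblock problems without touching the disk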

  14. 5 hours ago, FoxxMD said:

    @Syco54645 I'm going to need more detail than that.

     

     

    What failures exactly? Do you see anything in the logs for the multi-scrobbler container?

    I think most likely your callback url is incorrect -- it has to be registered with your last.fm application and it needs to be exactly where last.fm will redirect back to multi-scrobbler.

     

I posted on GitHub after I saw the message to post there instead, so I will continue there. I am posting a reply here for now, as it looks like all the config values are correct.

     

To answer: the various failures were caused by not including a callback URL. The comment for that line in the JSON samples sits one line below the value it describes, which caused confusion on my end.
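
For anyone hitting the same thing, this is roughly what the last.fm client entry in config.json looks like once the callback is included (field names are from memory of the multi-scrobbler sample config, so double-check them against the project's examples; the key, secret, and hostname below are placeholders):

{
  "type": "lastfm",
  "name": "FrankLFM",
  "data": {
    "apiKey": "YOUR_API_KEY",
    "secret": "YOUR_SHARED_SECRET",
    "redirectUri": "http://tower.local:9078/lastfm/callback"
  }
}

The redirectUri has to match the callback URL registered on the last.fm API application exactly.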

Trying to get this set up and working, and having issues with the last.fm client. It is telling me "Status: Auth Interaction Required". When I click the link to (re)authenticate or initialize I get various forms of failure. I have created an API key and put the values into config.json. I am unsure of how it should function when I get passed to last.fm and then back to the server at http://tower.local:9078//lastfm/callback?state=FrankLFM&token=XXXXXXXXXXXX

     

At this point the page just says OK and that is it. Heading back to the application, it still says that last.fm is not authenticated. Any idea what I am missing?

     

    2021-09-28T09:33:10-04:00 info : [App ] Server started at http://localhost:9078
    2021-09-28T09:33:10-04:00 info : [Sources ] (partyPlex) plex source initialized
    2021-09-28T09:33:10-04:00 info : [Source - Plex - partyPlex] Initializing with the following filters => Users: N/A | Libraries: party | Servers: N/A
    2021-09-28T09:33:10-04:00 info : [Sources ] (FrankPlex) plex source initialized
    2021-09-28T09:33:10-04:00 info : [Source - Plex - FrankPlex] Initializing with the following filters => Users: frank | Libraries: N/A | Servers: N/A
    2021-09-28T09:33:10-04:00 warn : [Scrobblers ] (FrankLFM) lastfm client auth failed.
    2021-09-28T09:33:10-04:00 error : [Client Lastfm - FrankLFM] Error: Invalid session key - Please re-authenticate
    at CWD/node_modules/lastfm-node-client/lib/ApiRequest.js:136:11
    at processTicksAndRejections (internal/process/task_queues.js:95:5)
    at async LastfmApiClient.callApi (file://CWD/apis/LastfmApiClient.js:84:20)
    at async LastfmApiClient.testAuth (file://CWD/apis/LastfmApiClient.js:148:30)
    at async LastfmScrobbler.testAuth (file://CWD/clients/LastfmScrobbler.js:34:27)
    at async ScrobbleClients.addClient (file://CWD/clients/ScrobbleClients.js:240:27)
    at async ScrobbleClients.buildClientsFromConfig (file://CWD/clients/ScrobbleClients.js:195:17)
    at async file://CWD/index.js:128:9
2021-09-28T09:33:10-04:00 error : [Client Lastfm - FrankLFM] Could not successfully communicate with Last.fm API
    2021-09-28T09:33:10-04:00 error : [API - Lastfm - FrankLFM] Testing auth failed
    2021-09-28T09:33:10-04:00 info : [Scrobblers] (FrankLFM) lastfm client initialized

     

  16. 9 hours ago, itimpi said:

That would work fine.

     

As to whether there is a ‘better’ method, that really depends on which tools you are most familiar with. Using mv in a screen session is fine if you know how to do that. You could achieve similar results using any file manipulation tool such as rsync, mc (midnight commander), Krusader running as a Docker container, a file manager on another machine, etc.

    Thanks! This greatly simplifies things for me.

     

Also, thanks to trurl for helping as well.
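
For my own notes, the "mv in a screen session" part boils down to something like this (disk numbers are just examples from this thread):

screen -S disk-move              # named session, so the move survives a dropped SSH connection
mv /mnt/disk1/* /mnt/disk11/     # run the move inside the session; note that * skips hidden files at the top level
# detach with Ctrl-a d, re-attach later with:
screen -r disk-move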

  17. 13 minutes ago, trurl said:

    That is incorrect. Writing any disk in the parity array updates parity at the same time.

     

In that case the process seems to be:

    1. mv /mnt/diskX/* /mnt/diskY/
  1. Once complete, make sure all files are moved (a quick check is sketched below)
    2. Stop array
    3. Go to Disk Settings and change the File system type to xfs
    4. Start array
5. Format the now-empty drive

    Would that be a good assessment of what I need to do? Is there a better way to move the files than just mv in a screen session?
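
For step 1.1, a quick sanity check on the source disk before formatting (diskX is the placeholder from the list above):

find /mnt/diskX -mindepth 1 -type f | wc -l    # should print 0 once everything has been moved off
du -sh /mnt/diskX                              # used space should be at or near zero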

  18. 32 minutes ago, trurl said:

    I don't use unBALANCE so I don't know exactly how it plays with disks and user shares.

     

    If you don't want any new files written to a disk while writing to user shares you can exclude the disk from all user shares in Global Share Settings. That would also mean no files from that disk would be included when reading user shares.

     

What method do you recommend for moving files from one drive to another within the array? I thought doing a "mv /mnt/disk1/* /mnt/disk11/" would not update parity. If I am incorrect, then I have been given wrong information and this will be quite an easy process.

     

  19. 4 minutes ago, trurl said:

Didn't notice before, but your disk6 is new, though not showing up as having any capacity yet. Looks like it must be clearing:

    Sep 19 13:17:32 Essex kernel: md: recovery thread: clear ...
    

     

    All write operations to the array update parity at the same time so parity remains in sync.

In addition to writing a file, operations such as move, copy, delete, and even format are all writes that update parity.

     

So how can I empty a drive without having any new files added to it at the same time? I guess that is the only stumbling block at this point.

  20. 24 minutes ago, trurl said:

Changing the filesystem of a disk requires formatting the disk to the new filesystem. So, you have to put its files somewhere else before formatting.

     

    All of your disks are very full. The simplest way to make space would be to add a disk to a new slot, or replace a disk with a larger disk.

Yes, I am in the process of waiting for an 8TB drive to add to the array for space, in hopes of starting the conversions.

     

What would the process look like for this? I would assume I have to stop anything that writes to the array and then use unBALANCE to move the files, and this is where I get fuzzy. When I move the files, are they moved as far as the array is concerned, or will I have to rebuild parity?

  21. 1 hour ago, trurl said:

    New Config isn't particularly dangerous, but it also isn't required (or necessarily even useful) for converting. Not sure where you got that idea.

     

Wouldn't using New Config and placing a drive in the wrong slot cause data loss? I do not trust myself not to mess up somewhere along the line with 10 drives to convert.

     

    1 hour ago, trurl said:

    Do you need some advice on converting?

     

Yes please, if you can give advice I would greatly appreciate it (I am on 6.9.0 if it matters). I have posted on the Reddit community multiple times and was told that I basically must use unBALANCE to move all of the files off of the drive, then use New Config to change the format of the empty drive. This video was given to me as the basic method, just without following the encryption bit.