RT87

Members · Posts: 44
Everything posted by RT87

  1. btw: that worked flawlessly
  2. okay, thank you very much. I will try switching the disks; I think in my particular case that is probably the better method.
  3. If I have another drive which could serve as cache (which incidentally I do), I assume I could just copy the data and use that drive (after proper init), correct?
  4. That's the only option? (sry!)
  5. ok, that at least worked in starting the array with the disk, but now it indeed says "unmountable: wrong or no filesystem"
  6. okay, so... I cannot start the array without the disk (i.e. it shows as "Missing"), and when I start it with the correct disk assigned (although not recognized), it says "Unmountable: No pool uuid".
  7. nope, not the regular array :)! Thanks for your help, I'll try!
  8. sry, I meant "cache pool" (which to my knowledge has parity or at least redundancy, although I haven't used that so far). Does your answer still apply? Because I can start the array, even though next to the cache disk it says "Wrong", but I am unsure of what the result will be...
  9. the pool is basically only this drive (no second drive, neither parity nor regular), if that is what you mean
  10. Hi, I changed my pool drive from an internal SATA connection to an external one (SATA case plugged in as regular USB; I know it's not exactly "recommended" to do so). Now it doesn't recognize the disk, since apparently the UUID has changed (I assume the SATA case transmits the ID of the controller chip or similar). Can I explicitly tell UNRAID to recognize this drive as the one that has been in that exact position, i.e. by manually changing the UUID mapping? [see the first sketch after this list] Alternatively, how can I resolve this situation without compromising my pool? Best and thanks Richard
  11. is there any progress on this? I am still experiencing this very issue with v2.1 XP.
  12. not sure what you mean by "original". The device itself is not recognized, not just a particular partition; gparted (e.g.) simply does not show anything but my regular internal drives.
  13. exactly... I thought maybe this occurred before and somebody knows the root cause of this issue... OK, so a new usb drive it is ;). Anyway, thanks!
  14. true, but how come I was still able to boot from it? The loader detected it just fine and the gui/shell came up as expected... just the PW didn't work
  15. I have several backups, the latest one from right before the migration, so that should not be an issue. I guess I could do that, but I would probably order a new usb drive first, so as to avoid an old one having any issues. However, I would be a little sad that this upgrade (kind of) broke my usb drive. Not the worst thing in the world, but still...
  16. I agree... however, when I try to do just that, the USB stick is no longer recognized... neither under windows (not natively, not with the unraid flasher, nor with several other tools), nor under linux [see the second sketch after this list]. Since I was able to boot unraid from it, I assume this is some VERY weird software bug. If my drive had simply died, that should not have been possible... Any thoughts XP?
  17. I am experiencing this issue as well with 6.10.3 (migrating from 6.9). Login also fails when using ssh instead of the GUI login (and no, the password is correct ;)).
  18. well almost, I still would like to just have it run with the mover-job, as a regular no-plugin-or-config-required setup, simply by using a new pool-usage-type "prefer-cache-but-sync-to-array-during-moving" [see the third sketch after this list]. I'm a little worried that people will neglect this because they may not fully realise the danger of running everything on your cache without a cache-array/backup. But maybe I'm a little off here, I don't know; as I said, I'm happy with my solution, I just think a more elegant way might be nice.
  19. It wouldn't need to be the same path; I could simply back up the appdata cache-path to /mnt/user/appdata_cache_backup or something similar.
  20. I guess both pretty much do the same thing, feature-wise; I just went with Vorta because it uses borg and it's something I can use more flexibly, i.e. it's not unraid-specific. With "official" I meant something along the lines of "another cache use-type", just plain and simple, no versioning or anything, just hold-on-cache-and-sync-to-array-via-cronjob.
  21. For now I'm using Vorta, but sure, any reliable backup solution will do... but still, I can't be the "only one" with this use-case, so an "official" way would be beneficial.
  22. Yes of course, but to be fair: Options always have a positive value. If I don't want this behaviour, I simply untick the corresponding box... At least all array operations would have to stop immediately, yes. If you have a cache, that could continue working or at least shut down gracefully. This option might also be for the super cautious, who are less interested in reducing or eliminating downtime, but rather in avoiding data loss as much as possible. But I agree, if this needs a significant rework, it won't get implemented. Oh well, worth a shot, that's why I asked whether it would make sense. Thanks guys! Since I got a little side-tracked (mea culpa, btw!) with lots of other interesting stuff: Does anyone know of any way to get my parity working, at least for the moment, using USB? I see your points regarding its weaknesses, but I will not be able to switch to a better solution right away, because I have no low-power components for a "true" server lying around, and thus I would first have to read up on that, order all the stuff and put it together XP.
  23. makes sense.... although I guess with some form of bookkeeping on a separate drive it would work, but to be fair that's maybe a bit over the top :P. I would still love a do-not-auto-emulate-drives option... would it make sense to put that into a feature request?
  24. Thanks a lot for the /dev and /mnt path clarification! Okay, I see; btw, also thx for that info! Is there some switch with which to tell unraid NOT to emulate a disk right away, but rather give errors on failed reads/writes first? That, in combination with an "emulate failed disk" button, would be a workaround to avoid disabling drives due to mere hiccups. Furthermore: if a drive becomes disabled but is not truly broken, i.e. such a hiccup has occurred, can Unraid "catch up" on writing the data (from the parity data) to the (temporarily) disabled drive, or, if at any point in time any drive got "disabled", do I automatically have to rebuild the entire parity/disabled disk?
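
First sketch (re post 10): Unraid normally identifies drives by their serial number, which a USB-SATA bridge can mask, and a btrfs pool additionally carries a filesystem UUID. Below is a minimal shell sketch of how one might check, and if necessary rewrite, the UUID of a btrfs pool device. It assumes the pool is a single-device btrfs filesystem presented as /dev/sdX1 (a hypothetical device node, not from the thread); this is not an endorsed procedure, so run it only with the array stopped, the filesystem unmounted, and a backup at hand.

    # Assumption: single-device btrfs pool on /dev/sdX1 (hypothetical node).
    # 1. See which UUID the filesystem currently reports.
    blkid /dev/sdX1

    # 2. With the filesystem unmounted, rewrite its UUID to the one the
    #    pool had before (taken from an old blkid output or a config
    #    backup). Depending on your btrfs-progs version, btrfstune may
    #    also require -f here; check 'man btrfstune' first.
    btrfstune -U <old-uuid> /dev/sdX1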
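
Second sketch (re posts 12 and 16): before concluding the stick is dead, it can help to check whether the kernel even enumerates it, independently of gparted. A small sketch using standard Linux tools, assuming the stick is plugged into any Linux machine:

    # Does the USB device enumerate on the bus at all?
    lsusb

    # Watch the kernel log while plugging the stick in; a healthy stick
    # should produce messages about a new USB device and a block device.
    dmesg --follow

    # List the block devices the kernel knows, with transport and model.
    lsblk -o NAME,SIZE,TRAN,MODEL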
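
Third sketch (re posts 18 and 19): pending an official pool-usage type, the described "keep on cache, sync to the array on a schedule" behaviour can be approximated with a cron job and rsync. A sketch under these assumptions: the paths follow post 19, the destination share is configured to live on the array only (otherwise the copy would land back on the cache), and the script path and schedule are arbitrary examples, not an Unraid feature.

    #!/bin/bash
    # Sketch: mirror appdata from the cache pool to a backup share on
    # the array. Run it e.g. nightly from cron:
    #   0 3 * * * /boot/config/sync-appdata.sh   (hypothetical path)

    SRC=/mnt/cache/appdata/               # appdata living on the cache pool
    DST=/mnt/user/appdata_cache_backup/   # array-only share, per post 19

    # -a preserves permissions/times/links, -H keeps hard links,
    # --delete makes the backup an exact mirror (no versioning, as in post 20)
    rsync -aH --delete "$SRC" "$DST"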