Everything posted by G3orgios

  1. @itimpi indeed, the Dynamix File Manager does the job and PHYSICALLY MOVES (COPY & DELETE) the file! Thank you.
  2. I have read that I should be careful NOT to move files from a SHARE to a DISK or vice-versa, and to make sure to always move Share-to-Share OR Disk-to-Disk. I assume that this is still valid for the latest Unraid v6.12.9? I have a bunch of disks added as Pools (1 disk per pool) containing a few TB of data that I need to migrate to the newly set-up ARRAY. I have been trying to MOVE this data from these disks to the array via MC or Krusader. On a DISK-to-DISK level this works fine and PHYSICALLY MOVES the data to the selected array DISK (see the sketch below). Sadly, this annuls the whole Unraid balancing, since I have to manually select the array destination disk... Using MC or Krusader to MOVE on a SHARE-to-SHARE level seems to simply map/link the file to be moved instead of physically moving it, leaving the file residing on the pool disk. So, how can I PHYSICALLY MOVE files from my pool disks to the Array? Thank you.
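     A minimal sketch of the disk-to-disk move described above, assuming a pool named "poolname", a share folder "MyShare" and array disk "disk1" (all placeholder names); like the MC/Krusader disk-level move, writing to an explicit /mnt/diskX path bypasses Unraid's allocation logic:
       rsync -avh --remove-source-files /mnt/poolname/MyShare/ /mnt/disk1/MyShare/
       find /mnt/poolname/MyShare -type d -empty -delete   # rsync leaves the emptied directories behind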
  3. @Squid The 2nd instance works great and installs a 2nd docker container just fine. I configured DIFFERENT ports for this 2nd container and I have proper web-UI access on the manually configured port (8081), BUT I keep seeing the SAME PORTS as container 1 under the docker allocations (see the sketch below)... Why so?
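     A short sketch for checking what is really bound, assuming the second container is named "my-app-2" and its internal web-UI port is 80 (both placeholders); "docker port" prints the live host-to-container bindings regardless of what the allocations page displays:
       docker port my-app-2                                      # list the real host:container port bindings
       docker run -d --name my-app-2 -p 8081:80 my-app-image     # explicit mapping: host 8081 -> container 80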
  4. @JorgeB As you said, via the web-UI, removing just one (1) disk kept the ZFS pool intact and accessible. Removing more disks (one per raidz1 vdev, of course!) didn't work. CLI worked fine! (zpool command reference) The steps, with a sketch below:
       • removed the ZFS pool from the web-UI
       • imported it via CLI: zpool import [poolname]
       • deactivated one (1) disk per vdev: zpool offline [poolname] [disk]
       • pool accessible fine at /mnt/[poolname]
     The ZFS pool disks show as Unassigned Devices in the web-UI, but the pool operates fine and I have used the offlined disks to start building the array and migrating data (100TB). Thank you, and I hope the new Unraid version gets a more flexible web-UI in this aspect.
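     A minimal sketch of that sequence, assuming a pool named "tank" and one placeholder disk per raidz1 vdev (sdd, sdh, sdl):
       zpool import tank          # re-import the pool after removing it from the web-UI
       zpool offline tank sdd     # take one disk offline in the first raidz1 vdev
       zpool offline tank sdh     # ...one in the second
       zpool offline tank sdl     # ...one in the third
       zpool status tank          # pool should report DEGRADED but stay accessible at /mnt/tank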
  5. @JorgeB So, how may I degrade the ZFS pool by removing one (1) disk from each raidz1 group, leaving the pool as 3 groups x 3 disks? When you say "v6.12 does not support multiple devices missing from a zfs pool", do you refer to the web-UI? Could this be done via CLI, by either un-mounting the three (3) disks OR by removing the whole ZFS pool and then re-adding it with ONLY the 3 groups x 3 disks? In theory, this Raidz1 pool remains active even after losing up to one (1) disk from each group. I even considered the hard(ware) approach, physically removing the 3 disks (one from each group), BUT I would prefer the soft(ware) approach (e.g. CLI). Any proposals on HOW this can be done? Thank you.
  6. In this 3 x 4-disk Raidz1 configuration, in theory I can remove one disk from each Raidz1 group and the overall ZFS pool will still be healthy (without redundancy any longer). I need to remove the sdd, sdh and sdl disks to start migrating from ZFS to the array (btrfs). HOW can I remove (unmount?) these three disks and then add them to the array?
  7. @bmartino1 Thank you for the feedback provided! The migration described by Plex didn't work in the OFFICIAL PMS docker app, even after setting the proper owner (chown -R nobody:users ./) and access rights (chmod -R 755 ./) on all copied files. Strangely, the SAME procedure using the BinHex PMS docker app works fine, and all content and metadata appear intact in the new installation (Unraid) from the past one (TrueNAS Core); see the sketch below. On a side note: following the OFFICIAL instructions, I signed out of my account under Settings > Server > General in the Plex Web App, which led to losing the server and thus all configured shared users and history! Do NOT do so! The first time I tried the BinHex install I had NOT signed out of the new server, and all users and history were maintained just fine. After a few tries to make the OFFICIAL PMS docker app work, I made the mistake of removing the server and thus lost users and history. ...really strange how one docker (BinHex) worked fine but the other (Official) didn't. Possibly the one you are proposing (LinuxServer) might work as well. I would prefer to use the official docker app (I tried quite a lot and even broke the users and history!) but since another release does the job, I will stick with this one I guess!
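     A sketch of the copy-and-permissions steps that worked with the BinHex container, assuming the extracted TrueNAS data sits in /mnt/user/temp and the container's /config maps to /mnt/user/appdata/binhex-plexpass (both paths are assumptions and depend on the docker template):
       rsync -avh "/mnt/user/temp/Plex Media Server/" "/mnt/user/appdata/binhex-plexpass/Plex Media Server/"
       chown -R nobody:users "/mnt/user/appdata/binhex-plexpass/Plex Media Server"   # owner as used above
       chmod -R 755 "/mnt/user/appdata/binhex-plexpass/Plex Media Server"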
  8. Seems that the web-UI disappearance was a glitch and after a container restart it was fine. Still, although I stopped the container, unzipped all past Plex files (FreeNAS) and restarted the container, the server is visible via the web-UI and registered to my account, but NO Library is visible; it shows just as a fresh PMS install. Any ideas?
  9. Hello, I am trying to migrate Plex Media Server (PMS, Plex Pass) from TrueNAS (Core, v13) to UnRaid (v6.12.9) on the same system (Supermicro). I have already imported the ZFS pool of TrueNAS in Unraid and am trying to follow the "Move an install to another system" Plex guide. I have installed the app (docker container), but after unzipping all TrueNAS files & folders (all except the not-needed Cache folder), PMS went "dark", and although the container starts there is no web-UI functionality any longer. Setting all files and folders to the "root" user as owner (the user under which PMS was installed) made no difference. Any ideas/hints on how the TrueNAS Plex can be migrated to UnRaid, please? Thank you.
  10. UPDATE: seems that the hiccup was in the disks' partitions; namely, partition 1 (FreeBSD swap) had to be manually deleted from EACH disk (sketch below). Please refer to
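     A sketch of deleting that swap partition, assuming one pool member at /dev/sdX (placeholder; repeat per disk) and a GPT layout where partition 1 is the freebsd-swap:
       sgdisk --delete=1 /dev/sdX   # remove partition 1; the ZFS data partition is untouched
       partprobe /dev/sdX           # have the kernel re-read the partition table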
  11. Hello, I am migrating from TrueNAS to UnRaid and trying to import a TrueNAS ZFS pool. I have added a pool with all 12 disks via the GUI (I had initially used zfs with 3 groups x 4 disks but, after reading here above, I switched it to auto). On the CLI, "zpool status" gives "no pools available". Starting the array does not start/mount the pool. Importing the pool via CLI shows: What am I missing? Thank you.
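     For reference, a minimal CLI import sketch ("tank" is a placeholder pool name); a pool last used by another OS typically needs -f:
       zpool import          # scan disks and list pools available for import
       zpool import -f tank  # force-import the TrueNAS pool by name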