quei

Members
  • Posts: 18

quei's Achievements

Noob (1/14) · Reputation: 0

  1. OK, thanks. Just one last question: I will start moving the data now. The plan is to migrate disk by disk, which means I'm currently running unBALANCE to transfer all files from disk 1 to disk 4 (a rough command sketch follows after this list). Once disk 1 is empty, I'll reformat it to BTRFS and repeat the process until all disks are on BTRFS. Is this the correct approach for this kind of migration? I'm asking because I encountered the message "Unmountable: Unsupported or no file system." That should go away once all disks are on BTRFS, correct?
  2. Would you recommend something completely different?
  3. Thanks for the feedback! So, if I understand correctly, I should move all the disks to a new BTRFS array. I believe the easiest approach would be to create a new BTRFS array and transfer the data there, then add parity as the last step so the transfer itself runs at full speed.
  4. Hey everyone! I've had some experience with extending ZFS pools in the past, and let me tell you, it's not always a walk in the park. There are definitely some limitations and boundaries you have to navigate. That's why I'm reaching out for advice on my current setup. Here's what I'm working with right now: 3 x 8 TB data disks (plus 1 for parity) on ZFS, and a cache made up of 4 disks. Today I'm getting my hands on 2 x 1 TB NVMe and 3 x 16 TB disks. My goal? I want to end up with an array of 4 x 8 TB and 2 x 16 TB, using one of the 16 TB disks as parity. But here's the kicker: I'm torn between sticking with ZFS or making the leap to BTRFS. What do you think? This setup is going to serve as my automated download station, powered by Docker containers, while also handling regular file services, acting as a backup target, AND doubling as a Plex server for 10 users. Any thoughts or advice would be greatly appreciated!
  5. Hey everyone, I'm Kevin, and I could use some help with Unraid and my Docker Compose file. Everything works smoothly when I'm using Docker Desktop locally, but as soon as I try it on Unraid, I can't connect to MongoDB. Here's my Docker Compose file:

        version: '3'

        volumes:
          mongo1_data:
          mongo1_config:

        services:
          mongo1:
            image: mongo:7.0.4
            command: ['--replSet', 'rs0', '--bind_ip_all', '--port', '27017']
            ports:
              - 27017:27017
            healthcheck:
              test: echo "try { rs.status() } catch (err) { rs.initiate({_id:'rs0',members:[{_id:0,host:'mongo1:27017'}]}) }" | mongosh --port 27017 --quiet
              interval: 5s
              timeout: 30s
              start_period: 0s
              start_interval: 1s
              retries: 30
            volumes:
              - 'mongo1_data:/data/db'
              - 'mongo1_config:/data/configdb'

          mongo-express:
            image: mongo-express
            restart: always
            ports:
              - 8081:8081
            environment:
              ME_CONFIG_MONGODB_URL: mongodb://mongo1:27017/

     I've tried a bunch of things like hardcoding the local IP to the port, changing ports, and defining networks, but none of them solve the issue. What's really odd is that when I attempt to access port 27017 through a web browser, I get a message saying: "It looks like you are trying to access MongoDB over HTTP on the native driver port." So something is listening, but when I try to connect with MongoDB Compass or Mongo Express, it just doesn't work. This problem seems to be specific to Unraid, because it works perfectly fine locally. Any ideas or suggestions would be greatly appreciated! (A quick connectivity check is sketched after this list.)
  6. Sorry, I'm a bit confused... Does this mean a move is still slow even if I move folders and files within the same share/dataset?
  7. I can move it within the share and then rename it. So, in the end, this should be fast as long as it happens on the same disk and within the same share?
  8. Well, I want to reorganise the share... and also move files and root folders into subfolders (within the same disk)... So there is no way to make this faster? Usually a move takes seconds, but here I have to wait hours to move files within the same disk. This seems to be the trade-off with ZFS...
  9. Yes, turbo write is enabled. So what would you suggest then? :-) I've attached the diagnostics: tauri-diagnostics-20230822-1503.zip
  10. All of them are ZFS. Do you need any other information?
  11. Hi all, I have a strange issue. I want to migrate my shares and folders to a new structure. The new and the old share have the same configuration, and all my files and folders are spread over different disks, which is fine. But when I move files within one disk with the command

          mv /mnt/disk1/series/* /mnt/disk1/data/media/tv/

      it takes ages to complete. I then looked at the dashboard and saw reads on all disks... Normally a move within the same disk should finish in seconds, but in my case data is being read from all disks. How can I fix this? It looks like the data is being copied from the other disks. (See the sketch after this list.)
  12. Hello everyone! My friend and I are planning to build an Unraid server. I've been using Unraid for 5 years now, and my friend has tested several other NAS software options but hasn't been satisfied so far 😊 Our joint plan is to create a new server primarily for Plex streaming, Nextcloud, and experimenting with VMs. The VMs will serve as a testing ground for various experiments. In addition to Plex, we intend to use Nextcloud to host our personal "cloud" storage and to store backups on this Unraid server. We're aiming to dockerize about 95% of the software. Initially, we had planned to build a system from scratch using these components:
      - CPU: Intel Core i5-7640X X-Series
      - RAM: 8 x Kingston KVR32N22D8/16
      - Mainboard: AsRock X299 WS/IPMI
      - Cache/VM controller: Icy Box IcyBox M.2 PCIe SSD PCI card
      - Case: Fantec SRC-2612X07-12G, 2U 680 mm storage case without PSU
      - PSU: Fantec NT-2U40E, 400 W
      However, after some research, I've found an alternative that doesn't require buying separate components and assembling them: a prebuilt system that was originally used for TrueNAS but will now run Unraid. I believe that if it's suitable for TrueNAS, it should work well with Unraid too. I've briefly reviewed the part list and haven't identified any issues. Here are the specifications of the prebuilt system:
      - Storage server: Supermicro CSE-829U X10DRU-i+ 19" 2U with 12x 3.5" LFF, DDR4 ECC, RAID, 4x 10GbE X540, 2x PSU
      - CPUs: 2 x Intel Xeon E5-2620V3 SR207 6-core server processors
      - RAM: 128 GB registered ECC DDR4 SDRAM (8x 16 GB DIMM)
      - Main storage adapter (SAS/SATA/NVMe controller): LSI SAS9300-8i / 9311-8i PCIe x8 with 2x SFF-8643 12G
      - Cache/VM controller: 1 x Intel HP SSD M.2 6G SATA storage controller
      - PSUs: 2 x Supermicro 1000 W PWS-1K02A-1R
      Important information:
      - The server will be hosted in a datacenter, so rackmount capability is essential.
      - Ideally the chassis offers at least one PCIe 3.0 x16 full-profile slot (for the GPU).
      - The server will serve around 15 people, so a GPU for Plex transcoding is necessary.
      Feel free to provide feedback on this revised plan!
  13. Hi all, I'm stuck with my plan to use a ZFS pool for maximum throughput. I've learned that I cannot easily extend the pool in single-disk increments. I would appreciate your guidance on how I should set up my NAS. The goal is to get the maximum read and write speeds out of these components:
      - 1 x AsRock X570M Pro4 with 48 GB RAM
      - 2 x Kingston 500 GB NVMe
      - 2 x WD Blue 500 GB SSD
      - 4 x Seagate Exos 7E10 8 TB HDD
      The focus on throughput is because we are moving into a house with 10 GbE infrastructure: the internal network can handle 10 GbE and the ISP can deliver that speed as well, so I have the perfect base for a NAS with good throughput. The initial plan was:
      - 1 pool with 2 NVMe drives with one spare (used for VM placement)
      - 1 pool with 2 SSDs with one spare (used for application data, mainly Docker)
      - 1 pool with 4 HDDs with one spare (as the main data pool)
      With that I hoped to reach around 240-300 MB/s on the HDD pool thanks to parallel reads and writes across the disks (see the rough numbers after this list). Something like the following might also work if I rethink the configuration:
      - 1 pool with 2 NVMe drives with one spare (used for VM placement and application data)
      - 1 pool with 2 SSDs with no spare (used as cache)
      - 1 pool with 4 HDDs with one spare (as the main data pool)
      With this, all writes would go to the SSDs first, which should be fast because they act as a cache for the data pool, and the data is flushed to the HDDs later. But I think reads will be a "problem", because the data ends up on individual disks rather than being spread across all of them. Or am I wrong here? Which configuration would you recommend in my case?
  14. But this only works for this one extension... What about in one or two years, when I add the next 8 TB disk? Then I'd need to rebuild it again... What's your suggestion in general? Is there another format where I can read from multiple disks to saturate a 10 Gb connection?
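
A minimal sketch of the disk-by-disk move described in post 1, assuming unBALANCE's rsync-based approach; the disk numbers and flags are illustrative, not a tested procedure:

    # Illustrative only: move everything from disk 1 to disk 4, preserving attributes.
    # Work against /mnt/diskX paths rather than /mnt/user when targeting specific disks.
    rsync -avPX --remove-source-files /mnt/disk1/ /mnt/disk4/
    # Confirm no files are left behind before reformatting disk 1 to BTRFS in the GUI:
    find /mnt/disk1 -type f | head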
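
A quick connectivity check related to post 5, assuming the compose file is deployed as shown; it only confirms that mongod answers inside its own container and is not a fix. The <mongo1-container> placeholder is hypothetical and must be replaced with the real container name:

    # Container names depend on the compose project, so list the real ones first:
    docker ps --format '{{.Names}}\t{{.Ports}}'
    # mongosh ships inside the mongo:7 image, so the server can be pinged in place:
    docker exec -it <mongo1-container> mongosh --port 27017 --quiet --eval 'db.runCommand({ ping: 1 })'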
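
A sketch of the behaviour discussed in posts 6-11, assuming each top-level share on a ZFS-formatted Unraid disk is its own dataset: a rename inside one dataset is a metadata operation, while a move between two datasets crosses a filesystem boundary and falls back to copy-plus-delete, even on the same physical disk. The rename paths below are made up for illustration; the cross-dataset command is the one from post 11:

    # Each top-level share on a ZFS disk typically shows up as its own dataset:
    zfs list
    # Rename within one dataset: finishes in seconds regardless of size.
    mv /mnt/disk1/data/media/tv/show-old /mnt/disk1/data/media/tv/show-new
    # Move from the "series" dataset into the "data" dataset: every file is
    # copied and then deleted, which is why it can take hours for a large library.
    mv /mnt/disk1/series/* /mnt/disk1/data/media/tv/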
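
A back-of-the-envelope check of the throughput estimate in post 13; the per-disk figures are assumptions for 7,200 rpm drives such as the Exos 7E10, not measurements, and the pool is assumed to stripe reads across roughly three data disks:

    per-disk sequential throughput:  roughly 150-250 MB/s (inner vs. outer tracks)
    ~3 data disks striped:           roughly 450-750 MB/s sequential, best case
    10 GbE link:                     1,250 MB/s raw, ~1,000-1,100 MB/s in practice

So 240-300 MB/s looks conservative for large sequential transfers but realistic once small files and random I/O enter the mix; on these assumptions the HDD pool alone will not saturate 10 GbE.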