quei

Everything posted by quei

  1. OK, thanks. Just one last question before I start moving the data. The plan is to migrate disk by disk: I'm currently using the unbalanced plugin to transfer all files from disk 1 to disk 4. Once disk 1 is empty, I'll reformat it to BTRFS and repeat the process until all disks are on BTRFS. Is this the correct approach for a migration like this? I'm asking because I encountered the message "Unmountable: Unsupported or no file system." This should go away once all disks are using BTRFS, correct?
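     In case it matters, this is roughly what the disk-to-disk move would look like if I did it by hand instead of through the plugin (a minimal sketch; the disk numbers match my setup, and the dry run is just how I'd test it first):

         # preview the transfer (remove -n for the real run)
         rsync -avn /mnt/disk1/ /mnt/disk4/
         # real run, preserving attributes
         rsync -av /mnt/disk1/ /mnt/disk4/
         # optional checksum verification pass before emptying disk 1
         rsync -avcn /mnt/disk1/ /mnt/disk4/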
  2. Would you recommend something completely different?
  3. Thanks for the feedback! So, if I understand correctly, I should transfer all the disks to a new BTRFS array. I believe the easiest approach would be to create the new BTRFS array and move the data over, then add the parity disk at the very end, since the transfer runs much faster while no parity has to be maintained.
  4. Hey everyone! I've had some experience with extending ZFS pools in the past, and it's not always a walk in the park; there are definitely some limitations and boundaries you have to navigate. That's why I'm reaching out for advice on my current setup. Here's what I'm working with right now:
     - Array: 3 x 8TB disks (plus 1 for parity), ZFS
     - Cache: 4 disks
     Today I'm getting my hands on 2 x 1TB NVMe and 3 x 16TB disks. My goal is to merge everything into an array with 4 x 8TB and 2 x 16TB, using one of the 16TB disks as parity. But here's the kicker: I'm torn between sticking with ZFS or making the leap to BTRFS. What do you think? This setup will serve as my automated download station (powered by Docker containers), handle regular file services, act as a backup target, AND double as a Plex server for 10 users. Any thoughts or advice would be greatly appreciated!
  5. Hey everyone, I'm Kevin, and I could use some help with Unraid and my Docker Compose file. Everything works smoothly when I'm using Docker Desktop locally, but as soon as I try it on Unraid, I can't connect to MongoDB. Here's my Docker Compose file:

         version: '3'

         volumes:
           mongo1_data:
           mongo1_config:

         services:
           mongo1:
             image: mongo:7.0.4
             command: ['--replSet', 'rs0', '--bind_ip_all', '--port', '27017']
             ports:
               - 27017:27017
             healthcheck:
               test: echo "try { rs.status() } catch (err) { rs.initiate({_id:'rs0',members:[{_id:0,host:'mongo1:27017'}]}) }" | mongosh --port 27017 --quiet
               interval: 5s
               timeout: 30s
               start_period: 0s
               start_interval: 1s
               retries: 30
             volumes:
               - 'mongo1_data:/data/db'
               - 'mongo1_config:/data/configdb'

           mongo-express:
             image: mongo-express
             restart: always
             ports:
               - 8081:8081
             environment:
               ME_CONFIG_MONGODB_URL: mongodb://mongo1:27017/

     I've tried a bunch of things, like hardcoding the local IP, changing ports, and defining networks, but none of them solve the issue. What's really odd is that when I access port 27017 through a web browser, I get a message saying: "It looks like you are trying to access MongoDB over HTTP on the native driver port." So something is listening, but when I try to connect with MongoDB Compass or Mongo Express, it just doesn't work. The problem seems to be related to Unraid, because it works perfectly fine locally. Any ideas or suggestions would be greatly appreciated!
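     For completeness, one way I can check whether the server itself responds (a minimal sketch; the actual container name depends on the compose project prefix, so mongo1 is an assumption, and <unraid-ip> is a placeholder):

         # ping MongoDB from inside its own container
         docker exec -it mongo1 mongosh --port 27017 --eval 'db.runCommand({ ping: 1 })'

         # from another machine, force a direct connection instead of
         # replica-set discovery; the replica set advertises the host
         # 'mongo1', which is not resolvable outside the Docker network
         mongosh 'mongodb://<unraid-ip>:27017/?directConnection=true'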
  6. Sorry, I'm a bit confused... Does this mean the move is still slow even if I move folders and files inside the same share/dataset?
  7. I can move it inside the share and then rename it. So, in the end, this should be fast as long as it happens on the same disk and in the same share?
  8. Well, I want to reorganise the share, and also move files and root folders into subfolders (inside the disk)... So there is no way to make this faster? Usually a move takes seconds, yet here I have to wait hours to move files within the same disk. This seems to be the trade-off with ZFS...
  9. Yes, turbo write is enabled. So what is your suggestion then? :-) I attached the diags: tauri-diagnostics-20230822-1503.zip
  10. All of them are ZFS. Do you need any other information?
  11. Hi all, I have a strange issue. I want to migrate my shares and folders to a new structure; the new and old share have the same configuration. All my files and folders are spread over different disks, which is fine. But if I now try to move files within a single disk with the command:

          mv /mnt/disk1/series/* /mnt/disk1/data/media/tv/

      it takes ages to complete. Then I looked at the dashboard and saw reads on all disks... Normally a move within the same disk should finish in seconds, but in my case data is being read from all disks. How can I fix this? It looks like it is copying the data from the other disks.
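      In case it helps the diagnosis, this is the kind of check I can run from the console (a sketch; the pool/dataset names are assumed from how Unraid names ZFS array disks):

          # list the datasets on disk1 and where they are mounted
          zfs list -o name,mountpoint -r disk1

      My understanding is that if 'series' and 'data' turn out to be separate datasets, mv crosses a filesystem boundary and has to copy + delete instead of just renaming.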
  12. Hello everyone! My friend and I are planning to build an Unraid server. I've been using Unraid for 5 years now, and my friend has tested several other NAS software options but hasn't been satisfied so far 😊 Our joint plan is to create a new server primarily for Plex streaming, Nextcloud, and experimenting with VMs. The VMs will serve as a testing ground for various experiments. In addition to Plex, we intend to use Nextcloud to host our personal "cloud" storage and to store backups on this Unraid server. We're aiming to dockerize about 95% of the software. Initially, we had planned to build a system from scratch using these components:
      - CPU: Intel Core i5-7640X X-Series
      - RAM: 8 x Kingston KVR32N22D8/16
      - Mainboard: ASRock X299 WS/IPMI
      - Cache/VM controller: Icy Box M.2 PCIe SSD card
      - Case: Fantec SRC-2612X07-12G, 2U 680mm storage case without PSU
      - PSU: Fantec NT-2U40E, 400 W
      However, after some research, I've found an alternative that doesn't require buying separate components and assembling them: a prebuilt system that was originally used for TrueNAS but will now run Unraid. I believe that if it's suitable for TrueNAS, it should work well with Unraid too. I've briefly reviewed the parts list and haven't identified any issues. Here are the specifications of the prebuilt system:
      - Storage server: Supermicro CSE-829U X10DRU-i+ 19" 2U with 12x 3.5" LFF, DDR4 ECC, RAID, 4x 10GbE X540, 2x PSU
      - CPUs: 2 x Intel Xeon E5-2620 v3 SR207, 6-core server processors
      - RAM: 128GB registered ECC DDR4 SDRAM (8x 16GB DIMMs)
      - Main storage adapter (SAS/SATA/NVMe controller): LSI SAS 9300-8i / 9311-8i, PCIe x8 with 2x SFF-8643 12G
      - Cache/VM controller: 1 x Intel/HP M.2 6G SATA SSD controller
      - PSUs: 2 x Supermicro 1000W PWS-1K02A-1R
      Important information:
      - The server will be hosted in a datacenter, so rackmount capability is essential.
      - The case should offer at least one PCIe 3.0 x16 full-profile slot (for the GPU).
      - The server will serve around 15 people, necessitating a GPU for Plex transcoding.
      Feel free to provide feedback on this revised plan!
  13. Hi all, I am stranded with my idea of using a ZFS pool for max throughput: I have learned that I cannot easily extend the pool in single-disk increments. I would appreciate your guidance on how I should set up my NAS. The goal is to get the maximum read and write speeds out of these components:
      - 1 x ASRock X570M Pro4 with 48 GB RAM
      - 2 x Kingston 500GB NVMe
      - 2 x WD Blue 500 GB SSD
      - 4 x Seagate Exos 7E10 8TB HDD
      The reason for chasing throughput is that we are moving into a house with 10GbE infrastructure: the internal network can handle 10GbE, and the ISP can deliver that speed as well. So I have the perfect base to set up a NAS with good throughput. The initial plan was:
      - 1 pool with 2 NVMe disks, one as spare (used for VM placement)
      - 1 pool with 2 SSDs, one as spare (used for application data, mainly Docker)
      - 1 pool with 4 HDDs, one as spare (as the main data pool)
      I hoped to reach around 240-300 MB/s with the HDD pool thanks to parallel reads and writes across the disks. Something like the following might also be possible if I think about a new configuration:
      - 1 pool with 2 NVMe disks, one as spare (used for VM placement and application data)
      - 1 pool with 2 SSDs, no spare (used as cache)
      - 1 pool with 4 HDDs, one as spare (as the main data pool)
      This way all writes would go first to the SSDs, which would be fast because they're configured as cache for the data pool, and the data would later be moved to the HDDs. But I think reads would be a "problem", because each file stays on an individual disk rather than being spread over all disks. Or am I wrong here? Which configuration would you recommend in my case?
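      To sanity-check whichever layout I end up with, my plan is to benchmark it with fio before trusting the numbers (a sketch; the mount path and sizes are placeholders):

          # sequential read throughput against the HDD pool
          fio --name=seqread --directory=/mnt/hddpool --rw=read \
              --bs=1M --size=4G --numjobs=1 --direct=1 --group_reporting

          # same, for writes
          fio --name=seqwrite --directory=/mnt/hddpool --rw=write \
              --bs=1M --size=4G --numjobs=1 --direct=1 --group_reporting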
  14. But this works only for this one extension... what about in one or two years when I add the next 8TB disk? Then I'd need to rebuild it again... What's your suggestion in general? Is there another format where I can read from multiple disks to saturate a 10Gb connection?
  15. So that means I cannot simply add a new disk to extend the pool? What would you suggest in my case? How should I set up my environment then? If I use XFS, I will have a performance issue later on, because it's not possible to read from or write to multiple disks in parallel.
  16. Hi all, I deleted my complete setup and started fresh (version 6.12.0-rc6), and I want to start with ZFS. My plan is to switch my network to 10Gb in around a year, so with ZFS I should get a nice performance boost then. I was confused about how to set up my new NAS with ZFS. I created a pool and formatted it with ZFS (first screenshot). Now I am struggling to extend this pool... I have no critical data on the drives, so I can also start from scratch, but why can I not add a new disk to the pool (error in the second screenshot)? Is the best way to delete the complete setup, start fresh, and use the Array section on the Main page? Or how should I set up a ZFS RAID correctly?
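      For my own understanding, this is how I picture the underlying ZFS behaviour (a sketch with hypothetical device names; Unraid manages pools through the GUI, so the commands are only illustrative):

          # a 4-disk raidz1 pool
          zpool create tank raidz1 sdb sdc sdd sde

          # as far as I understand, this is why a single extra disk is
          # rejected: a raidz vdev can't be widened here, so growing the
          # pool means adding a whole new vdev of the same shape
          zpool add tank raidz1 sdf sdg sdh sdi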
  17. Hi all, I have created a new Ombi instance and configured Plex. The Plex connection seems to work, but if I try to configure Sonarr or Radarr, it will not work... I see this in the log:

          [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
          [s6-init] ensuring user provided files have correct perms...exited 0.
          [fix-attrs.d] applying ownership & permissions fixes...
          [fix-attrs.d] done.
          [cont-init.d] executing container initialization scripts...
          [cont-init.d] 01-envfile: executing...
          [cont-init.d] 01-envfile: exited 0.
          [cont-init.d] 10-adduser: executing...
          usermod: no changes

          -------------------------------------
                  _         ()
                 | |  ___   _    __
                 | | / __| | |  /  \
                 | | \__ \ | | | () |
                 |_| |___/ |_|  \__/

          Brought to you by linuxserver.io
          We gratefully accept donations at:
          https://www.linuxserver.io/donate/
          -------------------------------------
          GID/UID
          -------------------------------------
          User uid: 1001
          User gid: 1001
          -------------------------------------

          [cont-init.d] 10-adduser: exited 0.
          [cont-init.d] 30-config: executing...
          [cont-init.d] 30-config: exited 0.
          [cont-init.d] 99-custom-scripts: executing...
          [custom-init] no custom files found exiting...
          [cont-init.d] 99-custom-scripts: exited 0.
          [cont-init.d] done.
          [services.d] starting services
          [services.d] done.
          Hello, welcome to Ombi
          Valid options are:
          Ombi 3.0.4892-master
          Copyright (C) 2020 Ombi

          --host      (Default: http://*:5000) Set to a semicolon-separated (;) list
                      of URL prefixes to which the server should respond. For example,
                      http://localhost:123. Use "*" to indicate that the server should
                      listen for requests on any IP address or hostname using the
                      specified port and protocol (for example, http://*:5000). The
                      protocol (http:// or https://) must be included with each URL.
                      Supported formats vary between servers.
          --storage   Storage path, where we save the logs and database
          --baseurl   The base URL for reverse proxy scenarios
          --demo      Demo mode, you will never need to use this, fuck that fruit
                      company...
          --help      Display this help screen.
          --version   Display version information.

          We are running on ,
          i
          Hosting environment: Production
          Content root path: /opt/ombi
          Now listening on: http://[::]:3579
          Application started. Press Ctrl+C to shut down.
          fail: Ombi.ErrorHandlingMiddleware[0]
                Something bad happened, ErrorMiddleware caught this
          Newtonsoft.Json.JsonReaderException: Unexpected character encountered while parsing value: <. Path '', line 0, position 0.
             at Newtonsoft.Json.JsonTextReader.ParseValue()
             at Newtonsoft.Json.JsonReader.ReadForType(JsonContract contract, Boolean hasConverter)
             at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type objectType, Boolean checkAdditionalContent)
             at Newtonsoft.Json.JsonSerializer.DeserializeInternal(JsonReader reader, Type objectType)
             at Newtonsoft.Json.JsonConvert.DeserializeObject(String value, Type type, JsonSerializerSettings settings)
             at Newtonsoft.Json.JsonConvert.DeserializeObject[T](String value, JsonSerializerSettings settings)
             at Ombi.Api.Api.Request[T](Request request) in C:\projects\requestplex\src\Ombi.Api\Api.cs:line 78
             at Ombi.Api.Radarr.RadarrApi.GetProfiles(String apiKey, String baseUrl) in C:\projects\requestplex\src\Ombi.Api.Radarr\RadarrApi.cs:line 28
             at Ombi.Controllers.External.RadarrController.GetProfiles(RadarrSettings settings) in C:\projects\requestplex\src\Ombi\Controllers\External\RadarrController.cs:line 40
             at lambda_method(Closure , Object )
             at Microsoft.AspNetCore.Mvc.Internal.ActionMethodExecutor.AwaitableObjectResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments)
             at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.InvokeActionMethodAsync()
             at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.InvokeNextActionFilterAsync()
             at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.Rethrow(ActionExecutedContext context)
             at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
             at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.InvokeInnerFilterAsync()
             at Microsoft.AspNetCore.Mvc.Internal.ResourceInvoker.InvokeNextResourceFilter()
             at Microsoft.AspNetCore.Mvc.Internal.ResourceInvoker.Rethrow(ResourceExecutedContext context)
             at Microsoft.AspNetCore.Mvc.Internal.ResourceInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
             at Microsoft.AspNetCore.Mvc.Internal.ResourceInvoker.InvokeFilterPipelineAsync()
             at Microsoft.AspNetCore.Mvc.Internal.ResourceInvoker.InvokeAsync()
             at Microsoft.AspNetCore.Builder.RouterMiddleware.Invoke(HttpContext httpContext)
             at Microsoft.AspNetCore.StaticFiles.StaticFileMiddleware.Invoke(HttpContext context)
             at Swashbuckle.AspNetCore.SwaggerUI.SwaggerUIIndexMiddleware.Invoke(HttpContext httpContext)
             at Swashbuckle.AspNetCore.Swagger.SwaggerMiddleware.Invoke(HttpContext httpContext)
             at Microsoft.AspNetCore.Cors.Infrastructure.CorsMiddleware.InvokeCore(HttpContext context)
             at Ombi.ApiKeyMiddlewear.Invoke(HttpContext context) in C:\projects\requestplex\src\Ombi\Middleware\ApiKeyMiddlewear.cs:line 51
             at Ombi.ErrorHandlingMiddleware.Invoke(HttpContext context) in C:\projects\requestplex\src\Ombi\Middleware\ErrorHandlingMiddlewear.cs:line 24
          fail: Ombi.ErrorHandlingMiddleware[0]
                Something bad happened, ErrorMiddleware caught this
          Newtonsoft.Json.JsonReaderException: Unexpected character encountered while parsing value: <. Path '', line 0, position 0.
             at Newtonsoft.Json.JsonTextReader.ParseValue()
             at Newtonsoft.Json.JsonReader.ReadForType(JsonContract contract, Boolean hasConverter)
             at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type objectType, Boolean checkAdditionalContent)
             at Newtonsoft.Json.JsonSerializer.DeserializeInternal(JsonReader reader, Type objectType)
             at Newtonsoft.Json.JsonConvert.DeserializeObject(String value, Type type, JsonSerializerSettings settings)
             at Newtonsoft.Json.JsonConvert.DeserializeObject[T](String value, JsonSerializerSettings settings)
             at Ombi.Api.Api.Request[T](Request request) in C:\projects\requestplex\src\Ombi.Api\Api.cs:line 78
             at Ombi.Api.Sonarr.SonarrApi.GetProfiles(String apiKey, String baseUrl) in C:\projects\requestplex\src\Ombi.Api.Sonarr\SonarrApi.cs:line 26
             at Ombi.Controllers.External.SonarrController.GetProfiles(SonarrSettings settings) in C:\projects\requestplex\src\Ombi\Controllers\External\SonarrController.cs:line 40
             at lambda_method(Closure , Object )
             at Microsoft.AspNetCore.Mvc.Internal.ActionMethodExecutor.AwaitableObjectResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments)
             at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.InvokeActionMethodAsync()
             at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.InvokeNextActionFilterAsync()
             at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.Rethrow(ActionExecutedContext context)
             at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
             at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.InvokeInnerFilterAsync()
             at Microsoft.AspNetCore.Mvc.Internal.ResourceInvoker.InvokeNextResourceFilter()
             at Microsoft.AspNetCore.Mvc.Internal.ResourceInvoker.Rethrow(ResourceExecutedContext context)
             at Microsoft.AspNetCore.Mvc.Internal.ResourceInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
             at Microsoft.AspNetCore.Mvc.Internal.ResourceInvoker.InvokeFilterPipelineAsync()
             at Microsoft.AspNetCore.Mvc.Internal.ResourceInvoker.InvokeAsync()
             at Microsoft.AspNetCore.Builder.RouterMiddleware.Invoke(HttpContext httpContext)
             at Microsoft.AspNetCore.StaticFiles.StaticFileMiddleware.Invoke(HttpContext context)
             at Swashbuckle.AspNetCore.SwaggerUI.SwaggerUIIndexMiddleware.Invoke(HttpContext httpContext)
             at Swashbuckle.AspNetCore.Swagger.SwaggerMiddleware.Invoke(HttpContext httpContext)
             at Microsoft.AspNetCore.Cors.Infrastructure.CorsMiddleware.InvokeCore(HttpContext context)
             at Ombi.ApiKeyMiddlewear.Invoke(HttpContext context) in C:\projects\requestplex\src\Ombi\Middleware\ApiKeyMiddlewear.cs:line 51
             at Ombi.ErrorHandlingMiddleware.Invoke(HttpContext context) in C:\projects\requestplex\src\Ombi\Middleware\ErrorHandlingMiddlewear.cs:line 24

      Well, I'm confused that it references paths under C:\ when this is Linux... Can someone please help me?
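      Since the exception complains about an unexpected '<' where JSON was expected, one check I can still do is to see what the Sonarr endpoint actually returns to Ombi (a sketch; host and API key are placeholders, and I'm assuming Sonarr's default port 8989):

          # if this prints HTML instead of JSON, Ombi is pointed at the
          # wrong URL/port or at a login page rather than the Sonarr API
          curl -s -H "X-Api-Key: <sonarr-api-key>" http://<sonarr-host>:8989/api/system/status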