FreakyUnraid

Members · 21 posts

Posts posted by FreakyUnraid

  1. 10 hours ago, Vr2Io said:

    You won't get any benefit from changing to a 9207, because it's still a 6Gb HBA, the same as the M1015; the bottleneck isn't the PCIe 2.0 link. You would need to upgrade to PCIe 3.0 and move both the HBA and the expander to 12Gb at the same time.

     

    Are you sure about that? These 2.5" Seagates only do about 135 MB/s max.

     

    The HBA has 8× 6 Gbps ports.

    With one 4-lane link to the expander, that's 4 × 6 Gbps = 24 Gbps.

    With a maximum of 20 drives, that comes to 24 / 20 = 1.2 Gbps per drive, and 1.2 / 8 = 150 MB/s.

     

    PCIe 2.0 does 500 MB/s per lane. The HBA is x8 with 2 ports, which comes to x4 per port?

    So 500 MB/s × 4 = 2000 MB/s, divided over 20 drives = 100 MB/s theoretical maximum. In practice this will be lower, is my guess.

    So PCIe 2.0 is the bottleneck here.

     

    Changing out the PCIe 2.0 card for a PCIe 3.0 card results in:

     

    PCIe 3.0 does 985 MB/s per lane × 4 = 3940 MB/s, divided over 20 drives = 197 MB/s.

    I read that in practice this is more like 3200 MB/s, which over 20 drives = 160 MB/s.

    Still well over the max these drives can do, with around 20% headroom.
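
    To sanity-check those numbers, here is the same arithmetic as a small script. This is only a sketch: it uses the theoretical link rates quoted above, assumes the 4-lanes-per-port split, and ignores SAS/PCIe protocol overhead, so real-world figures will be somewhat lower.

        # Rough per-drive bandwidth for the HBA/expander chain described above.
        # Link rates are theoretical maxima; protocol overhead is ignored.

        DRIVES = 20
        DRIVE_MAX = 135                    # MB/s, roughly what these 2.5" Seagates peak at

        sas_link = 4 * 6000 / 8            # one 4-lane 6 Gbps wide link to the expander, in MB/s
        pcie2 = 4 * 500                    # PCIe 2.0: ~500 MB/s per lane, 4 lanes
        pcie3 = 4 * 985                    # PCIe 3.0: ~985 MB/s per lane, 4 lanes

        for name, link in [("SAS 4x 6Gbps", sas_link), ("PCIe 2.0 x4", pcie2), ("PCIe 3.0 x4", pcie3)]:
            per_drive = link / DRIVES
            verdict = "fine" if per_drive >= DRIVE_MAX else "bottleneck"
            print(f"{name}: {per_drive:.0f} MB/s per drive -> {verdict}")

    With these assumptions the output matches the figures above: roughly 150 MB/s on the SAS link, 100 MB/s on PCIe 2.0 (below the 135 MB/s the drives can do) and 197 MB/s on PCIe 3.0.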

     

    If, in the future, I switch to 3.5" drives, which can do much higher speeds, I will also need fewer drives. So I think I'm good once I switch to something like a 9207-8i, right?

     

    10 hours ago, Vr2Io said:

    2.5" vs 3.5" is also apples and oranges: with 3.5" it's easy to get high-capacity CMR drives. In terms of cost and power usage, there ends up being no benefit to using 2.5".

     

    For example, three 5TB 2.5" disks vs one 16TB 3.5" disk: the 3.5" wins on every count.

     

    PS: Last month I bought an 18TB disk for just 257 USD; that's even a lot cheaper than shucking. And it only needs one port instead of 3 or 4.

     

    The drawbacks are, as you mention, harder cooling and more noise (from the cooling fan, not the disk itself) and a much longer parity check time.

     

    My current config is a 9300-4i4e: the 4i connects an expander with 16 disks and the 4e connects an expander with 12 disks. All expanders are 12Gb. This setup still has room to add one more expander for more disks.

     

    The 2,5" drives need more HBA/Expanders so the power consumption difference between a 3,5" setup is indeed negligible at this point. I just really like these little drives for being so quiet. Even in 100% operation (all drives) I can barely hear it. I remember my old Synology with 4x 3,5" drives which was in the same spot the current server is, and the noise was really unbearable.

     

    The power consumption 'demand' was for the new hardware. Power is getting expensive, so I don't want an HBA or expander that consumes a ton of power when there are (far) better and more efficient ones out there.

     

     

     

     

  2. Hi,

     

    Currently running a Fujitsu D3643-H (4x SATA) with 2x IBM M1015 (2x 8 SATA). All ports are populated with 2.5" drives. With 20 drives I'm at capacity and I need more storage. No ports left, so I'm looking for a good way to expand the number of ports. The less money and power consumption, the better.

     

    Options I considered but ruled out:

    - Changing HDDs for larger ones. I'm already running 5TB drives, so I can't go bigger in 2.5". Going 3.5" would mean changing out at least 3 drives (I have dual parity) to go bigger. With at least 16TB being the best option, this would set me back at least $800. Too expensive, and 3.5" drives also make too much noise for my taste. The server is located in a room I often work and sit in, so it's a big deal. The 2.5" drives are so quiet I can't even hear them. And I have a couple of spares lying around.

    - Swapping 1x M1015 for a 16-port HBA. Again, this would cost a lot. You can get one for around $170 on eBay, but I would like to get 2 so I have a spare lying around just in case. eBay shipping can take weeks if not months and I don't want my server to be down that long. At $340 this would be too costly in my opinion.

     

    The only real option I found was to swap one M1015 for an expander. Looking around, the Intel RES2SV240 seems to be the best option? At around $60, not a bad deal? The Lenovo is really cheap at just $30, but I don't think I can get enough ports with that option?

     

    After reading the performance topic on throughput I'm a bit worried that the PCIe 2.0 M1015 is going to bottleneck my drives quite a bit. They start a parity check at around 135 MB/s. With (in theory) 20 drives on the expander, that would mean they'd be bottlenecked to around 113 MB/s, if my math is correct? Could this be damaging in some way other than parity just taking a bit longer? Perhaps I should swap the M1015 for a PCIe 3.0 card like the 9207-8i and sell my M1015s?

     

    So in short:

     

    NOW

    2x M1015

     

    Option 1

    1x M1015 (PCIe 2.0 bottleneck with 20 drives on expander?)

    1x Intel RES2SV240

     

    Option 2

    1x 9207-8i PCIe 3.0 (Dell or a different one?)

    1x Intel RES2SV240

     

    Option 3

    ? Suggestions are welcome.

     

     

    Would love to hear your thoughts on this.

     

    Thanks.

     

  3. For people experiencing the same issue: DelugeVPN is running but the Web UI is not available.

     

    Edit the container and check that LAN_NETWORK is set to your local LAN subnet (192.168.1.0/24, for example). Mine was set to 'localhost', which resulted in DelugeVPN, and all dockers routed through it, not being accessible. All the Web UIs were unreachable.
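
    The key point is that LAN_NETWORK expects a CIDR subnet, not a hostname. If you want to verify a value before putting it in the container, here is a minimal check using Python's standard ipaddress module (the values are just examples):

        import ipaddress

        def valid_lan_network(value: str) -> bool:
            """True if the value parses as a CIDR subnet, e.g. 192.168.1.0/24."""
            try:
                ipaddress.ip_network(value, strict=False)
                return True
            except ValueError:
                return False

        print(valid_lan_network("192.168.1.0/24"))  # True  - what LAN_NETWORK expects
        print(valid_lan_network("localhost"))       # False - not a subnet, so routing breaks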

     

    I'm not sure how this happened, because until yesterday everything was working just fine. I followed Space Invaderone's video when setting things up, and when I re-watched it he also puts in the LAN subnet. So I have no idea where 'localhost' came from...

     

    Could an update of the container have caused this?

  4. On 1/24/2022 at 4:07 PM, wsd0823 said:

    Are you using a reverse proxy?   I use SWAG and I had the same issue (but at a 2GB limit) until I updated nextcloud.subdomain.conf changing:

    
    #       proxy_max_temp_file_size 2048m;   # Default
            proxy_max_temp_file_size 0; 

     

    If you're using the NGINX proxy manager you may want to read this.  It mentions a similar issue that was resolved by changing the same parameter:

    https://www.reddit.com/r/NextCloud/comments/li7fvh/big_files_download_problem/

     

     

    Yes, SWAG with Cloudflare. I already found that solution online and tried it. No luck. I also played around with timeout settings and set the chunk size to 50MB; nothing. Only WebDAV was giving me issues...

     

    WebDAV upload with an Android app: I contacted the developer and he looked into it. He built an option to force the app to use a 50MB chunk size. And that worked! So apparently a normal WebDAV connection doesn't do chunking on its own. At least it doesn't listen to the server, and that's why it fails behind a reverse proxy with Cloudflare.

     

    Nextcloud only mentions this: https://docs.nextcloud.com/server/latest/developer_manual/client_apis/WebDAV/chunking.html

    Yeah, how am I, a simple user, supposed to use that with WebDAV? I tried both addresses in Windows and they work, but keep hitting that 100MB upload limit. So no chunking, it seems...
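
    For what it's worth, this is roughly what the chunked-upload flow from that documentation page looks like when driven by hand. It is only a sketch under a few assumptions: it uses the Python requests library, the server URL, credentials and 50MB chunk size are placeholders, and there is no error handling or retrying. A plain Windows WebDAV mount will not do any of this on its own, which is consistent with hitting the 100MB limit.

        import uuid
        import requests

        SERVER = "https://server"                 # placeholder, as in the docs
        USER, APP_PASSWORD = "userid", "app-password"
        AUTH = (USER, APP_PASSWORD)
        CHUNK = 50 * 1024 * 1024                  # 50 MB, safely below Cloudflare's 100 MB limit

        def chunked_upload(local_path: str, remote_path: str) -> None:
            upload_id = f"upload-{uuid.uuid4().hex}"
            upload_url = f"{SERVER}/remote.php/dav/uploads/{USER}/{upload_id}"
            dest_url = f"{SERVER}/remote.php/dav/files/{USER}/{remote_path}"

            # 1. Create a temporary upload collection.
            requests.request("MKCOL", upload_url, auth=AUTH).raise_for_status()

            # 2. Send the file as numbered chunks, each below the proxy's size limit.
            with open(local_path, "rb") as f:
                index = 0
                while True:
                    data = f.read(CHUNK)
                    if not data:
                        break
                    requests.put(f"{upload_url}/{index:05d}", data=data, auth=AUTH).raise_for_status()
                    index += 1

            # 3. Ask the server to assemble the chunks at the final destination.
            requests.request("MOVE", f"{upload_url}/.file",
                             headers={"Destination": dest_url}, auth=AUTH).raise_for_status()

        chunked_upload("backup.tar.gz", "Backups/backup.tar.gz")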

     

    Perhaps it's similar to this: https://github.com/nextcloud/server/issues/4109 and a missing feature? Or maybe I'm missing something.

  5. On 12/12/2020 at 2:54 PM, skois said:

    Today I did some further investigation.
    When I uploaded a file and got the 504 error, I opened dev tools in Chrome and noticed the URL path included "dav".
    So I mapped a network drive in Windows with WebDAV. I tried to upload a 1GB file; it copied almost all of it and got stuck at 99% (it was then uploading it to the server). After waiting about a minute I got an error saying I should check my connection and try again. I thought it was a random error, because one minute to upload 1GB at 20Mbps is too fast. Tried again: about a minute in, error again. Tried 4GB, same timing. Doing the math, 20Mbps in one minute can upload a little more than 100MB.
    Then it hit me.
    The Cloudflare proxy free tier has a 100MB limit on uploads (which is also a hint that WebDAV does not do any chunking).
    So I set my Nextcloud CNAME on Cloudflare to DNS only instead of proxied.
    BOOM, files started to upload correctly. Problem solved, right? Nope.
    There is a new limit now: I can upload via the mapped WebDAV drive up to 2GB (2GB + 1 byte fails instantly). (This needs further investigation.)
    So now it was time to test file uploads from the web GUI. 2GB (exactly) uploaded successfully!
    2.1GB also successful! (Didn't expect that.) Now trying a 4GB file.

    *EDIT1*

    4GB returned the 504 error. I'll start looking at the reverse proxy nginx timeout configs.
    *EDIT2*

    Changed some timeouts to 15 min in NPM and now I get "Error when assembling chunks, status code 524" instead of 504.
    I'll try again with some huge timeouts, like 1 day, and see what happens.

    *EDIT3*

    I use NginxProxyManager. After adding my proxy host I went to /mnt/user/appdata/NginxProxyManager/nginx/proxy_host/numberoftheNChost.conf,
    copied the whole "location /" block, and then edited the proxy host again through the web UI (NPM web UI), Advanced tab.
    I pasted the "location /" block and added the following lines:

    proxy_connect_timeout 1d;
    proxy_send_timeout 1d;
    proxy_read_timeout 1d;
    send_timeout 1d;

    Anywhere in the block; the position doesn't matter.
    After that, a 5GB file upload completed successfully.
    When I had it at 15 minutes it didn't work, probably because the whole upload took almost an hour.
    I don't usually upload files that large through the web GUI, but it's nice to know that if I need to, it will work.

    There is also an open issue on the NginxProxyManager GitHub where someone asks for the ability to edit the timeouts from within the GUI, so we might see it there soon.

    I think for now this is where my quest ends :)

    *EDIT4*

    The above config also helps with the updater! It no longer times out! (Just updated to NC 21 Beta1 on my test server. NC21 feels a bit faster!)


     

    BUT even if it succeeds, it does not make ANY sense.

    Cloudflare shouldn't block this upload through the web GUI because of the chunking; if I'm not mistaken, the default chunk size is 10MB.
    Actually, the upload was never blocked, it just failed assembling (except it didn't!): if you wait a minute and refresh the page, the file is uploaded correctly and playable.
    This might be a timeout setting.

     

    I'll edit the last part later when the file upload completes and if I have more findings.

     

    Did you find a way to get around the 100MB upload limit when proxying Nextcloud through Cloudflare? 

     

    When I proxy Nextcloud through Cloudflare, the upload only becomes problematic when using a WebDAV connection. Uploads through the PC client and the website interface all work just fine. It's like WebDAV doesn't use chunked uploads?

     

    So I found this: https://docs.nextcloud.com/server/latest/developer_manual/client_apis/WebDAV/chunking.html

    WebDAV address mentioned there: 

    https://server/remote.php/dav/uploads/<userid>

    Fails with "403 Forbidden" message

     

    "normal" WebDAV address stated in Nextcloud:

    https://server/remote.php/dav/files/<userid>

    Fails because the connection is closed by Cloudflare due to the 100MB limit.

     

     

     

     

     

     

     

  6. 6 hours ago, JorgeB said:

    And in case I wasn't clear, that's what happened to you: you had issues with 4 disks, but only 2 got disabled because you have dual parity; with single parity just one would get disabled.

     

    Okay, but why do they get disabled? The two other disks had read errors too, but came back online after the reboot. I don't understand what made disks 2 and 3 different. And why does Unraid (seemingly always?) disable disks in such a scenario? Is it something preemptive? And what is it preventing by disabling those disks?

  7. 15 minutes ago, JorgeB said:

    No, I would wait for the next scheduled one.

     

    Great, back to normal operation it is. Really, really appreciate the help! Thank you!

     

    15 minutes ago, JorgeB said:

    Unraid will only disable one disk with single parity, or two disks with dual parity. If there are errors on more disks due to, for example, a controller issue, you just need to fix the issue and reboot/power back on; the disabled disk(s) will need to be rebuilt, like you had to do, and the other ones will recover immediately after boot.

     

     

    I'm not sure if I understand this correctly. Are you saying Unraid will never disable more than 2 disks (with dual parity)? How does that work? (If there is a wiki page about this, a link will suffice of course.)

  8. @JorgeB  @trurl

     

    (sorry, somehow pressing enter posted right away...)

     

    Success! What a relief.

     

    Disk 2 returned to normal operation
    Disk 3 returned to normal operation 
    Parity sync / Data rebuild finished - Finding 0 errors 
    Duration: 13 hours, 44 minutes, 39 seconds. Average speed: 101.1 MB/sec

     

    Don't think another parity check is necessary, right?

     

    Lessons learned: never let the server go to sleep again when using an LSI card, that's for sure, haha.

     

    But I still wonder, how does Unraid handle a failing LSI card? I was really lucky this time to have dual parity. But my other LSI card has 8 drives connected to it... I hate to think what would have happened if that one had failed. What are the odds of "just" 2 drives getting disabled in such a case? RIP array? Or how does Unraid handle this? I know from the past that when you have 'boot array at startup' enabled AND a faulty cable, you can be sure that combination results in a disabled disk and thus a rebuild. For that reason alone I disabled array boot at startup a while back.

  9. 5 hours ago, trurl said:

    But don't leave things as they are for too long. I would probably skip the preclears on the new disks, for example. Better if you don't use your server until you are ready to rebuild.

     

    Of course. Whenever something like this happens I just disable all services. Too bad for my Plex users, but better safe than sorry. I am skipping the pre-clears, because both drives were already pre-cleared about a month ago. Thankfully I bought some extras on Black Friday, so there's no need to pre-clear them again.

     

    5 hours ago, trurl said:

    It is always safer to rebuild to spares if you have them.

     

    Thanks, I just shucked 2 drives and will be replacing them tonight, so the server can rebuild overnight and during the rest of the day.

     

    So to sum things up (I don't want to screw this up): I can follow the "replacing failed/disabled disk(s)" section from here: https://wiki.unraid.net/Manual/Storage_Management#Replacing_disks

     

    To translate that to my situation, and just to be 100% sure that what I'm going to do is the right way:

    1. Stop the array.
    2. Power down the unit.
    3. Replace disk 2 and 3 with the spares.
    4. Double check that all cables are connected properly.
    5. Power up the unit.
    6. Assign the spares to disk 2 and 3 spots.
    7. Click the checkbox that says Yes I want to do this.
    8. Click the checkbox Maintenance mode
    9. Click Start.
    10. Click Sync to trigger the rebuild.
    11. Fingers crossed and report back with any problems or success ;) 

    Maintenance mode seems like the safest option to me. 

     

    Can you confirm that these are the right steps? I'm not missing anything?

     

    EDIT:

    Successfully replaced disks 2 and 3, and the array is now being rebuilt. See you in ~14 hours, hopefully with some good news :)

  10. 7 minutes ago, JorgeB said:

    Yes, that was inevitable. Both emulated disks are mounting, so you can rebuild on top (with dual parity you can rebuild both at the same time):

     

    https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself

     

     

     

    Okay, and because the disks are mounting there is no need to check the filesystem, correct?

     

    I have done a rebuild in the past (probably also caused by this sleep issue), but never 2 drives at the same time. Is there more risk involved when rebuilding 2 drives at the same time?

     

    I mean, both drives are still connected to the same cable and LSI card. Are you sure this was caused by sleep mode causing (temporary) issues with the LSI card?

     

    Would it be wise to swap disks 2 and 3 with new (pre-cleared) drives and then start the rebuild onto the new drives? In case something does go wrong with the rebuild, I would still have disks 2 and 3 lying around, so I could recover the data by just copying them to the new disks. Or am I overthinking this and should I do as you say and rebuild onto the existing drives?

  11. 34 minutes ago, JorgeB said:

    The LSI didn't like waking up; there's even a driver crash besides the many timeout errors. You should reboot first, which will clear the errors on the two still-enabled disks and the LSI issue, then start the array to see if the emulated disks are mounting and post new diags.

     

    Is that a 'thing', that LSI cards don't like sleep? I didn't realize that, but after reading your comment I googled around and there are quite a few topics where people say about LSI cards that "server parts are not meant for sleep mode", etc. I hadn't even thought about this for a second... dumbdumbdumb.

     

    So to sum things up and see if I understand you correctly:

    1. reboot
    2. start the array - in maintenance mode, I presume?
    3. check everything
    4. download diagnostics

    Correct?

    The server has been running 24/7 for a long time now, without any (real) issues. I replaced my motherboard, CPU and cache drive last week. No issues. Yesterday I thought, why not save a few bucks on power and put the server to sleep at night? I used my old settings in the sleep plugin (I've used it before, without any problems) and enabled the plugin.

     

    Today I'm waking up to a nightmare, thinking the plugin somehow (almost) destroyed my array. Thankfully I have 2 parity drives, but still...

     

    The server was set to wake at 07:00.

    About 30 minutes after that, disk 3 was disabled.

    After 1:40 hours, disk 2 was disabled.

    These are the messages I received:

    Quote

    Unraid Server, [19/01/2022 07:26] Server-UR: Alert [SERVER-UR] - Disk 3 in error state (disk dsbl)
    ST5000LM000-2AN170_WCJ2NMBJ (sdh)

    Unraid Server, [19/01/2022 07:27] Server-UR: Warning [SERVER-UR] - array has errors
    Array has 1 disk with read errors

    Unraid Server, [19/01/2022 08:40] Server-UR: Alert [SERVER-UR] - Disk 2 in error state (disk dsbl)
    ST5000LM000-2AN170_WCJ2DNLC (sdg)

    Unraid Server, [19/01/2022 08:40] Server-UR: Warning [SERVER-UR] - array has errors
    Array has 2 disks with read errors

    Unraid Server, [19/01/2022 09:19] Server-UR: Warning [SERVER-UR] - array has errors
    Array has 4 disks with read errors

     

    So, what I did after seeing all of this:

    - downloaded diagnostics, see attachment 

    - disabled docker

    - disabled the array

    - did a short self-test on disks 2 and 3 > both passed

    - Status as of now:

    • Disk 1 - read error 
    • Disk 2 - disabled, emulated - sst says passed
    • Disk 3 - disabled, emulated - sst says passed
    • Disk 4 - read error

     

    Looking for theories what happened:

    - The sleep plugin is the last thing that changed. I attached a screenshot of the sleep settings I used. So either the plugin is not working correctly, I used a bad setting, or it's just bad luck. I'm almost 100% sure it's one of the latter two. Although, I heard it 'shut down' rather harshly, or maybe I'm just not used to the sound of all drives stopping at the same time when the server goes to sleep.

    - Opened the case and I can rule out a power issue. The 4 drives aren't connected to the same power cable or the end of one.

    - But they are all connected to the same breakout cable and the same LSI card (IBM M1015 > SFF-8087 cable). That's not suspicious at all...

    - It's the only cable connected to that IBM M1015 in the server (I have two of those cards, the other one is full)

    - So

    1. some error with the sleep plugin? Although, the server was set to wake at 07:00 and the errors started (way) later?
    2. the sleep/wakeup resulted in a 
    • faulty cable?
    • faulty IBM M1015 card?

     

    Spare parts (that I know of)

    - IBM M1015 > not yet flashed

    - Multiple drives (same as in array) ready to go > already pre-cleared

    - Not sure about a spare SFF-8087 cable. I know I have at least one lying around, because I replaced one when a disk was having read errors. Not sure if that one was connected to the same IBM card we're talking about now... Maybe play it safe and order a new one?

     

    Next steps:

    Honestly, not sure what to do now. First time dealing with such a catastrophic failure. So before I do anything I would like some advice. Buy a new cable and flash the new LSI card to replace all the hardware that could be faulty? And then follow https://wiki.unraid.net/Manual/Storage_Management#Checking_a_File_System for disks 2 and 3? 

     

    What would you guys recommend I do and in which order?

     

     

     

    Edit: changed title from "Sleep plugin almost destroyed my array? 4 disks with read errors, of which 2 are disabled. How to proceed??" to "4 disks with read errors, of which 2 are disabled. How to proceed??" because it's probably not the plugin's fault and in hindsight it reads a little bit sensational.

     

     

     

     

     

    sleep settings.PNG

    server-ur-diagnostics-20220119-0906.zip

  13. Situation:

    Backup & restore app on my phone.

    Connected to Nextcloud over a WebDAV connection.

    The restore process gets stuck every time.

     

    error.log file gets filled with the same error over and over again:

    ERROR upstream prematurely closed FastCGI request while reading upstream, client: [ipaddress], server: _, request: "GET [filename] HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "[hostname]"

     

    Any ideas?

     

    EDIT:

     

    Seems the above error was just a symptom. Apparently you need to "tweak" Nextcloud as soon as you have more users or upload/download more data? This is what I found and did:

     

    Error in /mnt/user/appdata/swag/log/nginx/error.log
    	ERROR: upstream prematurely closed connection while reading upstream, client
    
    Error in /mnt/user/appdata/nextcloud/log/php/error.log
    	WARNING: [pool www] server reached pm.max_children setting (5), consider raising it
    
    	Added the following to /mnt/user/appdata/nextcloud/php/www2.conf
    
    	pm = dynamic
    	pm.max_children = 120

     

    This resulted in the next(cloud) error. 

     

    Error in /mnt/user/appdata/nextcloud/log/php/error.log
    	WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 0 idle, and 6 total children
    	
    	Added the following to /mnt/user/appdata/nextcloud/php/www2.conf
    	
    	pm = dynamic
    	pm.max_children = 120
    	pm.start_servers = 12
    	pm.min_spare_servers = 8
    	pm.max_spare_servers = 16
    	pm.max_requests = 500

     

    This seems to have solved the issues I had with uploading/downloading large amounts of data. Of course, while waiting to see if an error appeared again, I noticed this:

     

    Error in /mnt/user/appdata/swag/log/nginx/error.log
    	ERROR: FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: [ipaddress], server: _, request: "GET /admin//config.php

     

    I thought it was a related error, but I got another one from a different IP with "GET /wp-login.php". Strange, because that's WordPress? Found this post claiming it's probably a bot, so it's unrelated.
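
    As a footnote to the pm.* values above: I picked them by trial and error. A common sanity check for pm.max_children is to divide the memory you can spare for PHP-FPM by the average size of one worker. Here is a rough sketch, where both figures are assumptions for illustration rather than measurements from this server (check the real per-process size with something like "ps -o rss -C php-fpm"):

        # Rough php-fpm pool sizing: pm.max_children ~ RAM budget / average worker size.
        # Both figures below are assumptions for illustration; measure on your own server.

        ram_budget_mb = 4096       # memory you are willing to dedicate to the Nextcloud pool
        avg_worker_mb = 60         # assumed size of one PHP-FPM worker under Nextcloud load

        max_children = ram_budget_mb // avg_worker_mb
        print(f"pm.max_children ~ {max_children}")   # ~68 with these assumptions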

     

     

     

  14. On 12/18/2021 at 11:49 PM, yogy said:

     

    Thanks, but the steps mentioned there are too "expert" for me. How do I do those steps within Unraid? ELI5, please :)

     

    EDIT:

    I fixed it another way. All seems to be working again.

    - Unpacked the latest appdata backup .tar.gz

    - Opened the .tar file

    - Extracted Vaultwarden folder

    - Opened Krusader and deleted the contents of the Vaultwarden folder in appdata

    - Copied the extracted Vaultwarden backup back in

    - Started Vaultwarden again

    - All is working again. 

     

    So, did I do this "the right way"? Or the "it's stupid, but it works" way? :P

     

     

  15. I had to restore an appdata backup. The restore went fine, but now vaultwarden won't start anymore:

     

    [2021-12-18 21:16:03.674][panic][ERROR] thread 'main' panicked at 'Failed to turn on WAL: DatabaseError(__Unknown, "database disk image is malformed")': src/db/mod.rs:307
    
    0: vaultwarden::init_logging::{{closure}}
    1: std::panicking::rust_panic_with_hook
    at rustc/4961b107f204e15b26961eab0685df6be3ab03c6/library/std/src/panicking.rs:610:17
    2: std::panicking::begin_panic_handler::{{closure}}
    at rustc/4961b107f204e15b26961eab0685df6be3ab03c6/library/std/src/panicking.rs:502:13
    3: std::sys_common::backtrace::__rust_end_short_backtrace
    at rustc/4961b107f204e15b26961eab0685df6be3ab03c6/library/std/src/sys_common/backtrace.rs:139:18
    4: rust_begin_unwind
    at rustc/4961b107f204e15b26961eab0685df6be3ab03c6/library/std/src/panicking.rs:498:5
    5: core::panicking::panic_fmt
    at rustc/4961b107f204e15b26961eab0685df6be3ab03c6/library/core/src/panicking.rs:106:14
    6: core::result::unwrap_failed
    at rustc/4961b107f204e15b26961eab0685df6be3ab03c6/library/core/src/result.rs:1613:5
    7: vaultwarden::util::retry_db
    8: vaultwarden::main
    9: std::sys_common::backtrace::__rust_begin_short_backtrace
    10: std::rt::lang_start::{{closure}}
    11: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once
    at rustc/4961b107f204e15b26961eab0685df6be3ab03c6/library/core/src/ops/function.rs:259:13
    std::panicking::try::do_call
    at rustc/4961b107f204e15b26961eab0685df6be3ab03c6/library/std/src/panicking.rs:406:40
    std::panicking::try
    at rustc/4961b107f204e15b26961eab0685df6be3ab03c6/library/std/src/panicking.rs:370:19
    std::panic::catch_unwind
    at rustc/4961b107f204e15b26961eab0685df6be3ab03c6/library/std/src/panic.rs:133:14
    std::rt::lang_start_internal::{{closure}}
    at rustc/4961b107f204e15b26961eab0685df6be3ab03c6/library/std/src/rt.rs:128:48
    std::panicking::try::do_call
    at rustc/4961b107f204e15b26961eab0685df6be3ab03c6/library/std/src/panicking.rs:406:40
    std::panicking::try
    at rustc/4961b107f204e15b26961eab0685df6be3ab03c6/library/std/src/panicking.rs:370:19
    std::panic::catch_unwind
    at rustc/4961b107f204e15b26961eab0685df6be3ab03c6/library/std/src/panic.rs:133:14
    std::rt::lang_start_internal
    at rustc/4961b107f204e15b26961eab0685df6be3ab03c6/library/std/src/rt.rs:128:20
    12: main
    13: __libc_start_main
    14: _start

     

    How do I resolve this? 
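
    That panic is SQLite telling Vaultwarden its database file is corrupted. One way to confirm this before (or after) restoring a backup is to run SQLite's own integrity check against the database. Below is a minimal sketch using Python's built-in sqlite3 module; the path assumes the default db.sqlite3 file inside the Vaultwarden appdata folder, so adjust it to your setup, and stop the container first so nothing is writing to the file.

        import sqlite3

        # Path assumes the usual Vaultwarden appdata layout on Unraid; adjust as needed.
        DB_PATH = "/mnt/user/appdata/vaultwarden/db.sqlite3"

        con = sqlite3.connect(DB_PATH)
        try:
            rows = con.execute("PRAGMA integrity_check;").fetchall()
        except sqlite3.DatabaseError as exc:
            # A badly damaged file can abort before the check finishes.
            rows = [(f"integrity check aborted: {exc}",)]
        finally:
            con.close()

        if rows == [("ok",)]:
            print("Database passes SQLite's integrity check.")
        else:
            print("Corruption reported; restoring from a backup is the safe option:")
            for (line,) in rows:
                print(" ", line)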

     

    Thanks in advance!

  16. After 8 years of Synology I'm really considering Unraid. Unfortunately I haven't had time yet to build the new server and trial Unraid. But I really like the idea of everything being in a Docker container because of the low maintenance. Plus a large and active community that can help out if necessary.

     

    Like I said, I haven't tested Unraid yet, so commenting on something that's missing is a bit hard. My real 'negative' thought about Unraid is the pricing. The alternatives are mostly free of charge, and for a person on a budget that's the real drawback. Basic wouldn't cut it, so it's free vs $89. But I'm going to give it a try at the end of this month; I really hope it's worth it.

     
