Leaderboard

Popular Content

Showing content with the highest reputation on 04/30/21 in all areas

  1. Works too. I'm really impressed by DigitalDevices, and their support via Github or email is also excellent.
    2 points
  2. Regardless of the status of the drive, you should really set up notifications so that you are alerted to issues as they happen, not when you happen to look at the interface. You got "lucky" that the issue only concerned the parity drive and nothing bad happened on another drive. You might lose data the next time.
    2 points
  3. ....make recordings in parallel or use PiP. You can also put a multiswitch in between in the living room... those can be cascaded as well, if they are the right ones. Edit: https://de.wikipedia.org/wiki/Unicable ...something to read. Sent from my SM-G960F with Tapatalk
    2 points
  4. Awesome project! Perhaps I am a bit blind, but how do I add additional users? I do not have the "+ Add" option here.
    2 points
  5. .... well not without a ridiculous amount of faffing about swapping back the old router, to say nothing of the earache from my S.O. in the interim :) I'm sure there should be a 'turns pale and shudders at the thought' emoji, but I never quite got the hang of them. Thanks for the guidance - much appreciated.
    1 point
  6. The new repository is vaultwarden/server:latest. Change it in the Docker settings: stop the container, rename the repository to vaultwarden/server, hit Apply, and start the container. That's it. Don't forget to go to unRAID Settings >> Fix Common Problems (if the scan doesn't start automatically, click RESCAN) and you will receive a notification to apply a fix for the *.xml file change. I just went through this procedure and can verify that everything went smoothly.
    1 point
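    For anyone who likes to sanity-check from the unRAID console before editing the template, a rough sketch (the container name bitwardenrs below is only a guess; substitute your own):
      docker pull vaultwarden/server:latest                    # confirm the new repository pulls cleanly
      docker inspect --format '{{.Config.Image}}' bitwardenrs  # show which image the existing container currently uses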
  7. You're correct, that is another approach, but it does add some networking complication to the mix, and honestly if that's something you're prepared to deal with, you could do it on the unRAID box, on a separate PC like a Raspberry Pi even, or on the router itself if it's capable. I don't tend to suggest solutions like this because the complexity of a network environment is never evident from this side of the conversation, and if it breaks really badly, you can't come back for help. VPN-per-container will have slightly higher overhead, but honestly with today's internet and the resources available on your average computer, I think that concern is more of a relic of history. If VPN-per-container works for you, that definitely seems like the easiest, lowest-effort, still-secure option to me. Glad I could answer your questions, let us know if we can do anything else.
    1 point
  8. OK, I compared that one with the one from your first post in this thread. Since these are correcting totally different sectors each time, the likely cause is RAM. Have you run memtest lately?
    1 point
  9. Sorry, I don't know how to fix that problem. Are you inside the mariadb container when executing the command?
    1 point
  10. Yes. However, some people had to buy a small GPU like the GT710 for Unraid so that it ran without errors; it apparently depends on the board/BIOS. No. The GPU cannot be used by Unraid if it is assigned to a VM, even when that VM isn't running. You can, however, use one GPU alternately with several VMs. You can also reach the Unraid WebGUI at any time from a separate device such as a laptop or tablet; no GPU is needed for that. Keep in mind that it isn't worth shutting the VM down: the GPU's power consumption rises as soon as the VM is off.
    1 point
  11. 1) I use my array, because if I lost my plots I'd cry; it's taken so long to make them. But you can use anything really... I had spare drives in my array, so I just set them to spin forever. 2) I don't do this; I have 2 spare Intel SSDs that I use for plotting via Unassigned Devices. I did do 1 or 2 plots via my cache pool as a test. Just make sure you link it via the container, otherwise you'll plot to your docker.img file and it'll blow up. 3) I use both unRAID and my gaming PC, and just have my gaming PC transfer the plots to my array via a share. Hope this helps.
    1 point
  12. Pulling a disk with the array started will cause the disk to be disabled and thus emulated.
    1 point
  13. Can someone please explain what's going on with this new vaultwarden? Bitwarden_rs is now deprecated, so can we just change the repository, or should we start from scratch (create a new docker container)?
    1 point
  14. You are correct, I missed a step. Thanks for your help! Seems to be working now.
    1 point
  15. According to the log these are the only two devices with a valid btrfs filesystem, and each has a different fsid, so they weren't from the same pool:
    Apr 29 21:00:11 Tower kernel: BTRFS: device fsid bc84f358-49ca-4bfe-abf3-eaed0c9d7c53 devid 1 transid 298261 /dev/sdn1 scanned by udevd (1843)
    Apr 29 21:00:11 Tower kernel: BTRFS: device fsid 402fa753-15e2-450f-a062-090043d62a5c devid 1 transid 142621 /dev/sdh1 scanned by udevd (1843)
    You assigned these 3:
    Apr 29 23:43:10 Tower emhttpd: import 30 cache device: (sdn) PERC_H800_00fed4e00dcc545625005f1746b0ad4b_36a4badb046175f00255654cc0de0d4fe
    Apr 29 23:43:10 Tower emhttpd: import 31 cache device: (sdu) PERC_H800_00f516110cc70000ff005f1746b0ad4b_36a4badb046175f00ff0000c70c1116f5
    Apr 29 23:43:10 Tower emhttpd: import 32 cache device: (sdk) PERC_H800_001dff5c100b8d1d28005f1746b0ad4b_36a4badb046175f00281d8d0b105cff1d
    So the other two don't have a valid btrfs filesystem; you can try to mount sdn on its own with UD, then post new diags if it fails.
    1 point
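    If Unassigned Devices refuses, a read-only mount from the console tests the same thing; this is only a sketch, and /dev/sdn1 is taken from the log above (device letters can change across reboots):
      mkdir -p /mnt/test
      mount -t btrfs -o ro /dev/sdn1 /mnt/test   # read-only, so nothing on the disk is altered
      ls /mnt/test                               # if files show up, the filesystem mounts
      umount /mnt/test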
  16. Hey there @codefaux, please reach out to us via the support form and include your old and new USB GUIDs (a purchase receipt works) and we will take care of you. For the DbA, please include what you would like to change the name to. Thanks
    1 point
  17. Thank you codefaux - yes, Surfshark does unlimited logins. What put me off a number of better-known VPN options was the restriction to a set number of devices, which my family's gadgets alone would have exceeded, so I double-checked just now anyway. Realising that simple fact, and finally 'getting' its importance in light of what you say, makes me much less concerned. I think I might have picked up on what people were doing to better manage VPNs with a set limit of logins and assumed I was doing something wrong or not 'best practice', when in fact I can use VPN versions of dockers as much as I like. I thought to post this in the general section because I'm vaguely aware of VMs and dockers interacting - e.g. Ed of Space Invader One fame has a (now rather old) series about installing PFsense in a VM, and presumably everything on that server (including all the dockers) is sending and receiving data through that VM, which could also presumably host a single instance of the VPN... Thanks again
    1 point
  18. Yeah, that would work. I think there is just one file that needs to be replaced, but it is simpler for me to just copy the whole folder over.
    1 point
  19. I've now installed the latest, v465.27, and GPU Statistics is running again too.
    1 point
  20. You can put GPU Statistics back on; the plugin needs either the Nvidia, Intel, or Radeon plugin in order to install and run. Which driver version are you on now?
    1 point
  21. Yes, that's right. I also force all ROMs and options to UEFI in the BIOS, and VT-d/IOMMU is enabled. I'm using a Windows VM with GVT-d / UPT passthrough, so within the VM the emulated video card is enabled as well as the Intel GPU. You just install the OS and then the Intel driver, and then you can disable the emulated Standard VGA Controller in Device Manager afterwards if you want to (leaving it enabled could also allow native VNC access whilst using the iGPU for transcoding, for example). To get HDMI or analogue sound you can pass through the audio controller in the usual way. I think this config will work with new Linux kernels within a VM too. Arch tends to run newer kernels and versions than Unraid so it could be different, but I'm happy to try to help if people have questions.
    1 point
  22. Hey, thanks for your help. Once it had finished, adding the two 1TB drives worked as well.
    1 point
  23. I was just about to write.... The boot order was no longer correct: one of the HDDs was listed first and then the USB stick. For now I have removed the graphics card and connected a monitor, and it has booted correctly so far. I'll reinstall the card shortly and try a restart.
    1 point
  24. @ptrang I don't know which driver version this is, but there are a few drivers in the plugin that you can choose from; I thought this beta driver was Windows-only...
    1 point
  25. I hate to be the bearer of bad news, but it looks like the desktop version of YACReaderLibrary no longer allows you to sync with another YACReaderLibraryServer. This used to work fine, but I hadn't checked it in a while since I use a tablet to read my comics. It looks like the dev has made it so that only iOS devices can connect to a server. I guess this makes sense in their mind, in that they are assuming that most users will keep their comics on their main computer and then run the server option of YACReaderLibrary in order to connect the YACReader iOS app with that computer. The decreased usability is a shame, but I guess the dev is putting their energy into other areas. PS, you can find more info in the user guide, but as you might expect it's geared towards iOS users: https://ios.yacreader.com/user-guide/ I've also posted in the user forums to check with the dev to see if there is a way around this. I'll report back if anything promising arises.
    1 point
  26. Big thanks to those who helped out in my absence (@creativity404). I went on holiday and was expecting to have a connection to the internet, but I didn't. @Creativity404 Are you still having trouble yourself, or did you get to the bottom of it? Sadly the XMRig data folder is tiny; if you don't want to increase your docker image size, what I can do is install the drivers and use a utility like ncdu to find which locations are suitable to create volumes for. What driver version are you trying to use? Nvidia, right? The container is already based on Ubuntu, but the CUDA installation is HUGE, which is why I've made it optional. To put it into perspective, the XMRig developers opted to leave CUDA support out of the main release because the library alone is like 30mb when built; the 10gb or so you're seeing is quite accurate. I still need to go through and see if there's anything I can strip out of it. It shouldn't redownload every time it restarts, no, but it will redownload if the docker container is pruned. Can you please share a screenshot of your config and also your logs? I'm happy to take a look too.
    1 point
  27. Was able to delete just fine using MC. All is working properly. Thanks for taking the time to respond and assist, and thank you for helping the community. I guess my issue is that I am not a "coder", and understanding this is (to me) like learning a new language. I am slowly getting the hang of it, but it takes time and patience. I very much appreciate your offer for Q&A. I'll definitely be thinking of a couple of good ones; however, even that is difficult for me. I often don't know what the questions are until I run into an issue. I try to research extensively (via reddit and the unraid forums among other avenues - youtube), and ultimately I think I figure it out, but I struggle to understand what would be good to know at the surface. I feel I am tech savvy, but the command line is not something I am all too familiar with, and it will take a lot of practice for me to understand (i.e. I see people using sudo commands a lot but don't even know what that really means). Rather than have you explain that kind of stuff, I need to dive into it myself first so as not to waste anyone's time. I think my issue is that I am trying to be a "home network administrator" on top of a very demanding job and family life, so finding time to learn is very difficult. I will certainly "bookmark" your name, and as I think of some good solid questions I'll shoot them your way. I very much thank you for offering to assist. To answer your questions: you are correct, I am still not completely sure how things work. I am vaguely familiar with command line operations. I don't know much of anything about Linux and SSH, except that I can use SSH to access my server (I have PuTTY) and how to get to MC.
    1 point
  28. Hello @minhquan07, first, as long as you don't do anything rash, your data should be safe. Do you have a recent array health report? Your disk assignment is included in it. In any case, I would wait for a more experienced user to give you proper advice on the next steps. Considering that you had disabled drives, your diagnostics might help to guide you from there; please attach the zip to your next post (Tools / Diagnostics).
    1 point
  29. It IS your disk assignments. Without that file Unraid doesn't know which disk is assigned to which slot.
    1 point
  30. I have no idea what interface that is. Here is the setting in the unRAID web UI: This will change the display of CPU, motherboard, HDD, SSD, NVMe, etc. temperatures in the unRAID GUI. Scrutiny is not part of unRAID, so I cannot tell you how to change anything there.
    1 point
  31. I believe that you are correct. It's just strange because when I first set up unraid and loaded the console the font was fine; I only noticed this issue after I installed the nvidia driver plugin. Perhaps it was just a coincidence. But thank you for that post! Yeah, I know. I thought I'd read at first that docker would not do GPU passthrough, so I was going to launch a VM to run plex. I was looking at binhex's plex container and saw that it is in fact possible, so I started following his FAQ to enable it, which led me to the video plugin. I read somewhere that you can't use GPU passthrough for Docker and VMs at the same time? What do people do to reconcile that, say if they want to use a VM for gaming and run plex with hardware transcoding? I did NOT see the second post in this thread, but I went back and reviewed it. I missed the step in binhex's guide to add the extra parameter --runtime=nvidia. I am transcoding like a boss now, thank you so much!
    1 point
  32. Hi again. Only thing I have to say about this is, if you use vpn-included containers, each container will be a separate login to your VPN. Before you deploy, make sure your VPN is okay with multiple logins. Another thing to consider is trying to find a Reverse Proxy + VPN container, and using that as the "head" for your sub-containers. It may be easiest. A Reverse Proxy, in case you aren't familiar, provides a webserver that forwards requests to other services, typically based on the URL structure. So, for example, Sonarr would be at http://whatever/sonarr and nzbget would be at http://whatever/nzbget Though for future reference, there is a dedicated sub-forum for Docker-specific inquiries where you might draw different and/or more appropriate attention. https://forums.unraid.net/forum/47-docker-containers/
    1 point
  33. Nice touch with the two branches on the last plugin update. Thanks!
    1 point
  34. @Slaytanic The hardware info can get kind of bloated on VMs. You can go to the Integrations page and search for ZHA. Click as if you are going to install it; the first step before installing asks you to select your Zigbee device from a drop-down. This integration automatically finds your devices, so it should be a short list. Then take that USB address (e.g. ACM0) and set up zigbee2mqtt, or continue with ZHA. Edit 1 - those CC2531 sticks are notorious for coming unflashed. If you still don't see it in the ZHA drop-down, look up flashing guides.
    1 point
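    If the stick never appears in the ZHA drop-down, a quick shell check (just a sketch; paths vary by adapter) shows whether the host sees a serial device at all:
      ls -l /dev/ttyACM* /dev/ttyUSB* 2>/dev/null   # raw device nodes, e.g. /dev/ttyACM0
      ls -l /dev/serial/by-id/ 2>/dev/null          # stable by-id paths, handy for zigbee2mqtt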
  35. To avoid trouble with your Apple ID there are a few things you have to do. The standard MacinaBox installation uses an EFI with a config.plist that references iMacPro1,1 as the SMBIOS, but with a default serial number. Presumably everyone who doesn't know that this default serial number causes problems with the Apple ID just uses it. So what do you have to do? Basically just what spaceinvader describes in his YouTube video, i.e.:
    - Get the VM running, but do not sign in with your Apple ID yet!
    - Download OpenCore Configurator (it has to be the version for OpenCore 0.6.4, otherwise this goes wrong! I believe 2.19)
    - With OpenCore Configurator, mount the EFI of the OpenCore image from Spaceinvader so that you have access to the EFI folder
    - Copy this EFI folder and vars to the desktop
    - With OpenCore Configurator, open the config.plist stored in the EFI folder on the desktop
    - Then, as described in Spaceinvader's video, generate a new SMBIOS (but please iMacPro1,1, not MacPro). This SMBIOS is part of the config.plist, so the config.plist is modified so that a fresh serial number (board UUID etc.) is written into it.
    - Then unmount the OpenCore EFI via OpenCore Configurator and mount the vDisk
    - Copy the EFI folder and the vars from the desktop into the vDisk
    - Unmount and shut down the VM
    - In the VM's configuration, copy the path to the VM (3rd path) and paste it into the 1st path. Paths 2 and 3 can then be removed.
    Voila, on the next boot the VM should start, and the generated serial numbers should be visible in "About my Mac", with the Mac type shown as iMacPro.
    1 point
  36. The driver is already built and ready for download; please also update the plugin first.
    1 point
  37. Since we don't have diagnostics (maybe I should have asked) we don't know exactly what happened when you pulled the drive. It isn't clear that Unraid didn't recognize the drive, just that it disabled the drive. Unraid disables a disk when a write to it fails, because the disk is then no longer in sync with the array. After a disk is disabled it won't be used again until rebuilt, which makes it in sync again. After the disk is disabled, all subsequent access to that disk uses the "emulated disk": any read of that disk actually reads all the other disks instead and gets its data from the parity calculation, and any write to that disk, including the initial failed write, is emulated by updating parity, so all those writes can be recovered by rebuilding. If you post your diagnostics we might be able to see more about your configuration and what happened. The following is the "long form" of my diagnostics request: if possible before rebooting, and preferably with the array started, go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
    1 point
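    A toy illustration of why the emulated disk works, assuming single parity (this is not Unraid code, just the XOR idea behind it):
      d1=$(( 0xA5 )); d2=$(( 0x3C ))   # contents of two data disks at one position
      parity=$(( d1 ^ d2 ))            # the parity disk stores the XOR of all data disks
      echo $(( d1 ^ parity ))          # prints 60 (0x3C): the lost d2, rebuilt from the survivors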
  38. Unfortunately, the speed of reporting on issues like these -- and the infrastructure to respond to each controller's individual responses -- is not in place right now. This isn't an unRAID thing, it's just A Thing(tm), unfortunately. Your move from motherboard SATA to an HBA SAS controller *shouldn't* be all that difficult, but I can help prepare for it to at least some degree, because I've gone through it a few times, and I'm still told I have an Unapproved Setup. Frowny-face and all, I bet. First and most important is to make sure you have SMART data, because with SMART data (aside from the handy thing about knowing before a drive chokes to death) we have identification. With identification and preparation, things tend to go... well, less bad, haha. Do you have good SMART data in the WebUI? Do you have any idea what I'm talking about? If not, pop over here for a primer on how to know what you don't know, then come back here and tell me what it is you don't know and I'll try to do what I can from there. I'm not gonna know every interface, but I'll work with you as far as I can.
    1 point
  39. If you're looking for some more in-depth answers to how unRAID works, how to find your data, how to manage it etc, I'd be more than willing to give you a few solid answers in exchange for a few solidly laid out questions. In my experience, there is nothing better to get Down And Dirty with the internals than to fire up ssh, log in as root, and start working. I'll admit I kinda skimmed most of the middle bits, but I saw your problem and your solution and that you're still not sure how things really work? Are you familiar with Windows command line operation? Same for Linux? Do you know anything about SSH? I'm more than happy to fill in some gaps, and walk you through it, if you're willing to put effort into specificity in questions, and admitting when you don't know what I've just said.
    1 point
  40. You would have to create a specific path in the Krusader template; I think it was explained in one of SpaceInvaderOne's videos a while back, and I wonder whether the next video in his new series of basics videos for 6.9 won't cover that. Anyway, you should get better answers in the specific support topic for the Krusader release you are using (there are 3 different repos for it). If it is empty, or if the content is not important to you, yes.
    1 point
  41. Don't pause it, cancel it.
    1 point
  42. In principle the parity sync does nothing other than compute the parity across all drives and store it on the parity disk. But if you add a drive in the middle of a parity sync, that drive would first have to be formatted, and the parity calculated so far for the existing drives would still be missing the new drive's contribution. Hope that makes sense...
    1 point
  43. Tried it. Same error. Pretty sure I will need that for Unminable, as I'm not really mining DOGE. Is there a way to find/show any log files so I can see what happens when xmrig is run? Looking through docker.log I see this error: time="2021-04-28T16:21:43.848677945-07:00" level=error msg="Handler for POST /v1.37/containers/8fb45d098f31/start returned error: error gathering device information while adding custom device \"/dev/dri\": no such file or directory" Update: getting somewhere now. Thanks for pointing me in the right direction. I can start the docker now, so I can at least get past the error, and now I can tweak the parameters. (Had to set the GPU device to /dev/null as I am only using the CPU anyway.)
    1 point
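    The error above is the container engine failing to map a device that does not exist on the host; a quick check from the unRAID console (just a sketch):
      ls -l /dev/dri 2>/dev/null || echo "no /dev/dri on this host -- CPU-only, so leave the GPU device unmapped or point it at /dev/null"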
  44. Try getting rid of "DOGE:" in your wallet address
    1 point
  45. Do you understand your command line? Also, it is a Windows command line, so you can't just paste it into a terminal. Edit the docker: basically you are going to edit the pool and your wallet, and the rest goes into the additional xmrig arguments. Look here for the command-line options and familiarize yourself with what your command line actually does: https://xmrig.com/docs/miner/command-line-options You should then understand what to put into the additional arguments.
    1 point
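    As a rough illustration of what typically ends up in the additional xmrig arguments once the pool and wallet are set (hypothetical placeholder values, not a recommendation; the linked docs have the full list):
      -o pool.example.com:4444 -u YOUR_WALLET_ADDRESS -p worker1 --donate-level 1 --cpu-max-threads-hint 75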
  46. Update, but without better/additional info: after 3 days (usually just 1) of running without fail, the problem has occurred again. I notice the problem has occurred because there will be 2-3 hyperthreads stuck at 100%. I have been unable to try anything because this time the whole server became unresponsive once the thread on core 0 got "stuck". I could still see the dashboard since it was already open, but I couldn't view any other page or open the web terminal. I went and looked at the physical machine, but it just showed a black screen as output. I will troubleshoot/diagnose at the next opportunity.
    1 point
  47. It doesn't seem to work with the unRAID remote beta. Once I updated my DNS it was super unwilling to connect. Which... beta, but still, fyi. I'll upload some logs and jazz later; for now, it's time for some Snyder Cut and, I imagine, a brief mourning period for what's left of my childhood. (ok, ok, judgement reserved... for now)
    1 point
  48. Yeah, but in my case I also needed to know which driver I had used, and that's why I needed the network config. But thanks anyway. Anyhow, after another server crash (🤮), fixing the most likely cause of the error, recovering the data again, and lastly fixing major issues with some containers, I finally had time to reconfigure the networking. It was a bit of a pain, but if this happens again I now at least have a metadata backup of the config, thanks to this simple scheduled user script I run every night:
    #!/bin/bash
    BACKUP_PATH=/mnt/user/backup/system/metadata
    docker network ls > "${BACKUP_PATH}/docker_network.txt"
    docker images --digests > "${BACKUP_PATH}/docker_images.txt"
    docker ps > "${BACKUP_PATH}/docker_ps.txt"
    Besides the networking info, I also added other useful info that can come in handy, such as being able to fetch a specific image version instead of trying to fix the latest image.
    1 point
  49. For anyone wondering, the answer is yes! I had to edit my Let's Encrypt config and make "blueiris" a subdomain. As soon as I changed that it started working immediately. I was also able to close my stunnel port forwarding rule in my router! Let's Encrypt is pretty cool stuff. 😎
    server {
        listen 443 ssl;
        root /config/www;
        index index.html index.htm index.php;
        server_name blueiris.random.server.name.org;
        ssl_certificate /config/keys/letsencrypt/fullchain.pem;
        ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
        ssl_dhparam /config/nginx/dhparams.pem;
        ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
        ssl_prefer_server_ciphers on;
        client_max_body_size 0;
        location / {
            include /config/nginx/proxy.conf;
            proxy_pass https://192.168.1.100:8777; # NOTE: Port 8777 is the stunnel port number and not the blue iris http port number
        }
    }
    1 point