dan4UR

Members · Posts: 25
Everything posted by dan4UR

  1. My core is up and running, and the connection from the Android client works again. Solution: I had to completely remove the Docker container and manually remove the "roonserver" folder in appdata. Then I did a complete reinstall with the allocations from the Docker template. After this I still wasn't able to discover the core from either the Android client or the Windows 10 client. The solution was to configure a new core: first restore from backup, restart the core, then log in with my credentials, and now everything is up and running again.
  2. After Roon told me there was an update, I installed it the internal way, and since then I'm not able to log in or see my Roon core. While the Roon Docker container is running I can't see a log; when it is stopped, the Docker log says "permission denied". I've tried everything: removing the old Docker container, installing it completely fresh, moving the old appdata ... What other information do you need?
     Edit:
     27.12.2023 22:18:40 /run.sh: line 39: /app/RoonServer/start.sh: Permission denied
     27.12.2023 22:19:40 /run.sh: line 39: /app/RoonServer/start.sh: Permission denied
     27.12.2023 22:20:40 /run.sh: line 39: /app/RoonServer/start.sh: Permission denied
     27.12.2023 22:21:40 /run.sh: line 39: /app/RoonServer/start.sh: Permission denied
     This is my log.
     Edit 2:
     29.12.2023 17:42:05 00:00:00.000 Info: get lock file path: /tmp/.rnsgem0-
     29.12.2023 17:42:05 00:00:00.003 Info: GetLockFile, fd: 34
     29.12.2023 17:42:05 00:00:00.003 Info: GetLockFile, res: 0
     29.12.2023 17:42:05 00:00:00.003 Trace: Nope, we are the only one running
     29.12.2023 17:42:05 Initializing
     29.12.2023 17:42:05 Started
     29.12.2023 17:42:06 aac_fixed decoder found, checking libavcodec version...
     29.12.2023 17:42:06 has mp3float: 1, aac_fixed: 1
     29.12.2023 17:42:10 Running
     Newest log. Still no connection to the Roon core ...
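As a side note (an assumption based on the log above, not a confirmed fix for this container): a repeating "Permission denied" on a start script usually means the file lost its execute bit, for example after the appdata files were copied without preserving permissions. A minimal local sketch of the symptom and the fix, using a stand-in script instead of the real container paths (`roonserver` and `/app/RoonServer/start.sh` are taken from the post above):

```shell
# Stand-in for /app/RoonServer/start.sh to reproduce the symptom locally:
mkdir -p /tmp/RoonServer
printf '#!/bin/sh\necho "RoonServer starting"\n' > /tmp/RoonServer/start.sh
chmod 644 /tmp/RoonServer/start.sh          # readable but not executable

/tmp/RoonServer/start.sh 2>/dev/null || echo "Permission denied, as in the log"

# The fix is restoring the execute bit; against the real container that
# would be something like:
#   docker exec roonserver chmod +x /app/RoonServer/start.sh
chmod +x /tmp/RoonServer/start.sh
/tmp/RoonServer/start.sh                    # now starts normally
```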
  3. Just wanted to say thank you, @JorgeB. You are my hero 🥰 The system is up and running. I've done a few reboots now, will update all plugins and Docker containers, and after that I'll start the upgrade to unraid 6.10.
  4. Had to use it with "-f". Now the drive is still listed under UD. Should I just add it back to the cache pool, or hit the Format button first?
  5. That's correct. Is it enough to change the share settings from "Cache: Only" to "Yes" and start the mover? I'll also shut down all Docker containers/VMs from the Settings tab, and maybe the CA Backup as well. Losing some temporary files is not a big deal, since I can get them back, but losing the Docker instances with all their settings would be a horror for me. After I add the freshly wiped NVMe back to the cache pool, do I just set the old cache shares back to "Only" and start the mover?
  6. Hey, back again. I didn't have much time the last few days, but I think I have some partially good news. I disconnected the NVMe listed under UD and started my rig. But what do I do now? Delete dev1 under Historical Devices? (That way, I think, unraid shouldn't place it under UD again when I put it back in.) Stop the array, set the number of disks in the cache pool to 1, and back up my data/move it to the array? Or should I back up/move my data to the array first? I think I'll first wait for @JorgeB to check my diagnostics and see whether everything is OK. apollon-diagnostics-20220520-1035.zip
  7. That will be the hardest part of the operation 🤣
  8. You know what, after my home-office session I'll disconnect it from the motherboard and give it a try. When it's done, do I just boot and start the array, or is there anything else to do?
  9. Or would it be possible to deactivate the device in the BIOS? Or would it still be visible to unraid?
  10. Okay, I'll try it. Not that nice, since the NVMes sit under a heavy heat spreader on my mainboard. But hopefully it will work after that 🤗
  11. And here is the new file, but still the same 🤔 apollon-diagnostics-20220512-0951.zip
  12. And here we go. See the diags after the reboot. apollon-diagnostics-20220511-2141.zip Edit: Or should I also start the array?
  13. Hey @JorgeB, two diagnostics: the first after a fresh reboot with the array stopped (...-1536.zip), the second after manually starting the array (...-1537.zip). Dashboard screen: I hope someone can help find a solution for rescuing or rebuilding my cache pool. apollon-diagnostics-20220511-1536.zip apollon-diagnostics-20220511-1537.zip
  14. Hello, I've had some big problems with my machine. I've got an Enermax AIO water cooler, and this little guy just died, which is why my server had shut down. When I started it again, right after booting and opening the Dashboard, I saw my lost cache NVMe, but listed under UD 😀 (yeah, it's alive). Then I saw that my cache is "Unmountable: no file system". Right after I realized this, the whole machine shut down hard. I went to the cellar and confirmed: yes, my server is offline. I took it and connected it to a monitor to check the BIOS. Oh hell, the CPU temp went within seconds from 30°C to something like 109°C, and then it shut down. I checked the water cooler, disassembled it, cleaned the CPU and cooler block, applied new thermal paste, and booted again. Same result. Then I switched to the boxed air cooler, and voilà, the CPU is chilling at 42°C 😇 But what do I do now with my broken cache pool? Disk 2 is still part of the pool, but disk 1 sits under Unassigned Devices. Should I still check whether NVMe 1 is broken? SMART doesn't show any errors. The only information I found was for dev 1 (cache pool NVMe 1): How can I restore the whole thing? All my Docker and download temp data is/was stored in the cache pool ... 😪
  15. That's what I think. I'll shut down my rig and connect a monitor, since it's headless. Maybe I'll find some information in the BIOS.
  16. That's the point. I have two identical WD NVMe drives set up as a cache pool, running for more than a year now. Now the Cache 1 drive is no longer detected ...
  17. Hello guys, today I went to the unraid Dashboard and I don't know what's going on. Every Monday I receive an array health report, and so I did on Monday the 25th: everything just fine [PASS]. Now I noticed that the Cache 1 NVMe is gone ... and my shares are partially unprotected. What's going on here? I rebooted my rig in the hope it would help, but it's still the same. apollon-diagnostics-20220427-0803.zip
  18. A small update from my side. On two RPis I had the problem that the script simply wouldn't start (these were media players running LibreELEC, which out of the box only has a single user, and that is root). Here is a small workaround that now works without problems. First, before running the script for the first time, you have to connect to the Pi via ssh from the unraid built-in terminal (top right in the status bar):
      ssh root@<pi-ip-address>
      This generates the host key fingerprint, so the key used for identification/verification is created on unraid. Confirm with "yes" and then enter the password for the root user. You should now be successfully connected. After that, simply close the window by typing "exit" twice. To make the backup script runnable, one small modification to the script is still needed. At this point:
      #Backup erstellen
      sshpass -p ${SSH_PW} ssh ${SSH_USER}@${PI_IP} sudo "dd if=/dev/mmcblk0" | dd of=${BACKUP_PFAD}/${BACKUP_NAME}-${DATUM}.img bs=1MB
      the "sudo" in front of the dd has to be removed, so that it looks like this:
      #Backup erstellen
      sshpass -p ${SSH_PW} ssh ${SSH_USER}@${PI_IP} "dd if=/dev/mmcblk0" | dd of=${BACKUP_PFAD}/${BACKUP_NAME}-${DATUM}.img bs=1MB
      And there you go: the script runs through without complaint. Maybe one of the experts can explain exactly why, but root and sudo don't seem to like each other, and unfortunately the log file doesn't point to this being where the error lies either.
  19. Unfortunately I'm stuck at one point; in fact, no backup gets created at all. A file is created at the correct path, but with 0 KB. When I run the adapted script, the following appears:
      Script location: /tmp/user.scripts/tmpScripts/Pihole_Backup/script
      Note that closing this window will abort the execution of this script
      0+0 records in
      0+0 records out
      0 bytes copied, 3.23965 s, 0.0 kB/s
      /mnt/user/Backup/Pihole /
      rm: missing operand
      Try 'rm --help' for more information.
      /
      pishrink.sh v0.1.2
      pishrink.sh: Gathering data ...
      Error: The device /mnt/user/Backup/Pihole/pi_image-20210903.img is so small that it cannot possibly store a file system or partition table. Perhaps you selected the wrong device?
      pishrink.sh: ERROR occurred in line 281: parted failed with rc 1
      pishrink.sh: Possibly invalid image. Run 'parted /mnt/user/Backup/Pihole/pi_image-20210903.img unit B print' manually to investigate ...
      I already tested with @sonic6 whether I have ssh access at all, and YES, I do, both via ssh and via sshpass. I can rule out the special character "#" in the password, since I do get access; maybe it causes problems inside the script, though ... A quick check of whether I need root rights or a password for various things gave:
      pi@UniPi:~ $ sudo crontab -e
      no crontab for root - using an empty one
      Select an editor. To change later, run 'select-editor'.
      1. /bin/nano <---- easiest
      2. /usr/bin/vim.tiny
      3. /bin/ed
      Choose 1-3 [1]:
      (I don't know how far this could help, but I'll just post it.) Maybe one of you clever minds can find a solution to the problem.
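Since the `dd` produced a 0-byte image, `pishrink.sh` was bound to fail on it. A hypothetical guard (the function name `check_image` is made up) that could sit between the dd step and the pishrink step to fail early with a clearer message:

```shell
# Hypothetical guard: refuse to hand an implausibly small image to pishrink.
check_image() {
  img="$1"
  min_bytes=$((100 * 1024 * 1024))   # any real SD-card image is far larger
  size=$(stat -c%s "$img" 2>/dev/null || echo 0)
  if [ "$size" -lt "$min_bytes" ]; then
    echo "Backup failed: $img is only $size bytes" >&2
    return 1
  fi
}

# Demo with a truncated file standing in for the 0 KB backup image:
head -c 10 /dev/zero > /tmp/pi_image-demo.img
check_image /tmp/pi_image-demo.img || echo "rejected truncated image"
```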
  20. Hey guys, maybe someone can help me. I've got a problem reading out the values for free/used disk space. I'm using the SNMP plugin on unraid and the SNMP adapter in ioBroker to show the values of my unraid machine in VIS. I'm using this OID: .1.3.6.1.4.1.2021.9.1.13.1 to look up the free disk space for disk 1 (likewise for disks 7 and 8). Everything just fine. But when I try to get the values for disk 2, I do get some values, but not the ones I see on my Dashboard in unraid ... unraid: ioBroker: What's wrong here?
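Two things worth checking (assumptions on my part, not a confirmed diagnosis): in net-snmp's UCD-SNMP-MIB `dskTable`, column `.13` is `dskAvailLow`, only the lower 32 bits of the free space in kB, so on large disks it wraps every 2^32 kB (about 4 TiB) and has to be combined with `dskAvailHigh` (`.14`); and the table index follows the order of the disk entries in `snmpd.conf`, not necessarily unraid's disk numbers. A sketch with placeholder IP and community string:

```shell
# Map table index -> mount point via dskPath (column 2 of dskTable):
snmpwalk -v2c -c public 192.168.1.10 .1.3.6.1.4.1.2021.9.1.2

# Free space for index 2: combine the high and low 32-bit halves
# (dskAvailHigh = column 14, dskAvailLow = column 13, both in kB):
high=$(snmpget -v2c -c public -Oqv 192.168.1.10 .1.3.6.1.4.1.2021.9.1.14.2)
low=$(snmpget  -v2c -c public -Oqv 192.168.1.10 .1.3.6.1.4.1.2021.9.1.13.2)
echo "free kB: $(( high * 4294967296 + low ))"
```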
  21. Since I was in contact with sonic, and since I originally intended to back up Pis, this is what came out of it, without having to give every Pi its own script. The shrinking might as well be done by the unraid box, since it presumably has more power available than the little Pi. @i-B4se What exactly would that look like? I believe the question also came up with @sonic6.
  22. Yeah, I just updated the Docker container, and two or three days before that the Roon core. Everything runs just fine.
  23. Quite happy with this Docker container, it runs well. But when there's an update, does the Docker container also need to be updated, or does the core get the update automatically?
  24. Hello everyone, first of all thanks to @steini84 for this plugin; I'm very interested in it and hope to get it running the way I want. I'm running a relatively new unraid server in my house:
      AMD Ryzen 9 3900X
      ASRock X570 Taichi
      Kingston 32GB ECC RAM
      2x Seagate Exos X16 16TB (unraid array)
      2x WD SN750 1TB NVMe (unraid cache pool)
      Dell Perc HBA
      Silverstone CS380 case (8x hot-swap bays directly connected to the HBA card)
      2x additional 5.25" to 3.5" hot-swap frames for another two drives (directly connected to the mainboard)
      Everything runs just fine at the moment; I'm learning a lot about unraid while setting up the machine. Planning started nearly two years ago, and early this year I got and installed the new hardware. My use case for the ZFS pool would be something like this: Docker containers are directed to the NVMe cache and in my opinion should stay there (I'm running a Plex Media Server on it and like the speed of the NVMes while serving media and metadata to the clients in my house). Yesterday I upgraded my machine and installed two used WD40EFRX drives. I installed them in the additional frames that are connected directly to the mainboard (everything fine, preclear is running at the moment). The plan is that all other drives of the unraid array get connected to the HBA card (while I'm upgrading and getting more drives), so the two 4TB drives stay separate. I want to set up a Nextcloud Docker container on my system for all of my personal media (at the moment: ~1TB of photos; many sensitive documents and PDFs (the plan is to digitize all of my paper documents and store them); sync of two smartphones to Nextcloud, etc.). So my scenario would be something like: install the Nextcloud Docker container and set it up, but store all the data of my personal cloud on a ZFS pool consisting of the two mentioned 4TB drives (I would also create sub-filesystems as mentioned in the first post). Is it possible to do so? After successfully preclearing the drives, they have no file system. Do I have to format them with the Unassigned Devices plugin, or do I just start creating a ZFS pool through the terminal? If my setup scenario is okay, is it still possible to use the compression methods of ZFS? I mean, I want to browse my files on multiple devices (through the Nextcloud app). This is absolutely new to me (I only know using WinRAR on a normal Windows PC; if I RAR some photos, I can't browse them without extracting the files first, and I don't think ZFS compression works that way?!). (Only far, far away plans: if this all works the way I want, would a single-disk ZFS pool (e.g. an external USB backup) just be a way to use the advantages of the ZFS file system, so that the internal disk pool and the backup disk can ideally communicate?) Sorry for my rusty English; it's been many years since I last wrote an essay completely in English. Greets from Germany, dan
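Regarding the "format first?" and compression questions, a hedged sketch of how the two 4TB drives could become a mirrored pool under the ZFS plugin. The pool/dataset names and the `/dev/disk/by-id/...` paths are placeholders, and `zpool create` destroys whatever is on the disks, so double-check the ids first. No prior formatting via Unassigned Devices should be needed, since `zpool create` lays down its own on-disk structures:

```shell
# Mirrored pool from the two WD40EFRX drives (placeholder device ids):
zpool create -o ashift=12 cloud mirror \
  /dev/disk/by-id/ata-WDC_WD40EFRX-XXXX /dev/disk/by-id/ata-WDC_WD40EFRX-YYYY

# Child filesystem for the Nextcloud data, with transparent compression.
# lz4 works per block, invisibly to applications: files stay directly
# browsable, unlike a RAR/ZIP archive.
zfs create -o compression=lz4 cloud/nextcloud

# Verify the settings and, later, the achieved ratio:
zfs get compression,compressratio cloud/nextcloud
```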