
tech_rkn

Everything posted by tech_rkn

  1. Given the lack of responses or interest (60 reads, 0 replies), I am abandoning the unraid project for my part. The community seems to be in hibernation mode. For my part, I am almost certain this is a bug either in unraid or in SWAG with respect to version 6.9.1. On top of that, I am still too much of a beginner with nginx and docker to master and configure everything on my own. I tested two other projects that work perfectly on my test machine: OpenMediaVault and TrueNAS. Likewise, they are nice, polished, and dockerized. And there is more activity on their forums. So, bye... I will come back to continue my tests with future versions.
  2. Hi guys, I had the same error on an Ubuntu Server system. I fixed it by using this kernel parameter at boot time: amd_iommu=off. amd_iommu=soft works as well. It seems to be an amdgpu problem.
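For anyone unsure where to set that parameter: on a typical Ubuntu Server install the kernel command line is configured in /etc/default/grub. A minimal config sketch, assuming the Ubuntu default options ("quiet splash") are what your file currently contains (adjust to match your own line):

```shell
# /etc/default/grub -- append amd_iommu=off (or amd_iommu=soft) to the kernel cmdline.
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amd_iommu=off"
```

Then run `sudo update-grub` and reboot for the change to take effect.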
  3. Hello to all the frenchies here. Rodolphe, 52 years old, with some Linux knowledge that let me build a DIY Nextcloud, meant to be educational and temporary, based on Debian 8 / Apache 2 / PHP 7 / MariaDB / LVM / mdadm, with an .ovh domain, all over IPv4. That was in 2018... This DIY build uses an old Asus motherboard with 8 native SATA ports and an Athlon FX CPU from 2006, 1 system disk (SSD), Icy Dock cages, and 6 Seagate Pro 4 TB HDDs, which was enormous for me at the time. Now, in 2021, after early signs of failure, I think it is time to replace everything (hardware and software) except my disks, which are still perfectly viable. My goal, after discovering superboki's tutorials, would be to run unraid with a Nextcloud docker container, preferably over IPv6, keeping my cages and the 6 current disks for storage. I have a Gigabyte B450M-DS3H rev 0.1 motherboard and a Ryzen 3 2200G, which work with unraid v6.9.1. I also have 3 salvaged disks that let me test under real conditions, via a PCI-to-8xSATA card based on a Marvell 88SE9215 controller. Following the tutorials, I get as far as the CloudCommander and SWAG steps. And there, two odd things under IPv6 and/or IPv4: the port-mapping fields of the docker containers do not seem to be bound correctly (0.0.0.0:xxx). Note that CloudCommander works anyway. SWAG, however, is another story.
When launching the container, I get an "execution error", even though I have:

Command: root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker create --name='swag' --net='dockernet' -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e 'EMAIL'='[email protected]' -e 'URL'='rkn2.ovh' -e 'SUBDOMAINS'='www,cloudcmd' -e 'ONLY_SUBDOMAINS'='false' -e 'VALIDATION'='http' -e 'DNSPLUGIN'='' -e 'EXTRA_DOMAINS'='' -e 'STAGING'='false' -e 'DUCKDNSTOKEN'='' -e 'PROPAGATION'='' -e 'PUID'='99' -e 'PGID'='100' -p '80:80/tcp' -p '443:443/tcp' -v '/mnt/user/appdata/swag':'/config':'rw' --cap-add=NET_ADMIN 'linuxserver/swag'
b05c98af741c9402e340256e33e80d8e71eddd5254ac62166f238bf1d821fdd4
The command finished successfully!

Note that I added the Let's Encrypt CAA record so I can also validate via DNS if needed (tested and OK). In the logs I see a few slightly odd things:

nmbd[1932]: NOTE: NetBIOS name resolution is not supported for Internet Protocol Version 6 (IPv6). (no big deal)
webGUI: Successful login user root from 2a01:e0a:225:4e50:: (cool)
kernel: mdcmd (38): set md_num_stripes 1280 (blah blah, the array starts correctly)
kernel: mdcmd (39): set md_queue_limit 80
kernel: mdcmd (40): set md_sync_limit 5
kernel: mdcmd (41): set md_write_method
kernel: mdcmd (42): start STOPPED
kernel: unraid: allocating 20870K for 1280 stripes (4 disks)
kernel: md1: running, size: 976762552 blocks
kernel: md2: running, size: 468850520 blocks

I restart docker:

root: starting dockerd ...
avahi-daemon[2326]: Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
avahi-daemon[2326]: New relevant interface docker0.IPv4 for mDNS.
avahi-daemon[2326]: Registering new address record for 172.17.0.1 on docker0.IPv4.

I launch swag, and then I get this error:

driver failed programming external connectivity on endpoint swag (8reduit8): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use

So I conclude there is a network configuration problem...
Note that I can reproduce this error when starting over from a fresh unraid install... with or without "docker network create xxxx". Any ideas to get me out of this mess?? Probably some basic blunder on my part... Thanks in advance!!! If screenshots or anything else are needed, I can share them via my original nextcloud... ; tower-diagnostics-20210407-0004.zip
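For reference, "address already in use" means some process on the host already owns port 80 — on unraid that is normally the web GUI's own nginx. A quick check plus the usual workaround: remap SWAG to unused host ports, then forward the router's 80/443 to them. This is only a sketch; the 180/1443 values are an example, not a required convention, and all the other -e/-v options stay exactly as in the original docker create command:

```shell
# On the unraid console: see which process is bound to port 80.
ss -tlnp | grep ':80 '

# Workaround sketch: give SWAG unused host ports instead of 80/443.
docker create --name='swag' --net='dockernet' \
  -p '180:80/tcp' -p '1443:443/tcp' \
  --cap-add=NET_ADMIN 'linuxserver/swag'
```

Then port-forward WAN 80 to 180 and WAN 443 to 1443 on the router so Let's Encrypt HTTP validation still reaches the container.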
  4. OK, update and closure. My low-quality ENUODA USB 3.0 64GB stick is unreliable. After repartitioning it with dedicated tools and recreating it with the unraid USB tool, I was able to reboot without error, and also to reproduce the hardware failure when unraid shuts down through the GUI. The Kingston DTSE9 is much better and did not suffer data loss when unraid goes boom... whether from an electrical reboot or a genuine GUI shutdown. Thanks for the help pointing me in the right direction (my first guess). Consider this CLOSED.
  5. The stick was created using the unraid USB tool, of course. I am testing its reliability (some kind of low-end brand from China). So far so good. But nonetheless, I will repartition it back to a fresh FAT32 state and re-use the unraid USB tool... I will also test a Kingston DTSE9 16GB stick to be sure. Thanks for the help.
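For the repartitioning step, a minimal sketch of the commands involved, assuming the stick shows up as /dev/sdX (a placeholder — check the real device name with `lsblk` first, since wipefs/mkfs are destructive). The function below only prints the steps so they can be reviewed before running anything for real:

```shell
#!/bin/sh
# Print (not execute) the steps to wipe a stick back to a single FAT32 partition.
# /dev/sdX is a placeholder: substitute the real device reported by `lsblk`.
fat32_reset_cmds() {
    dev="$1"
    printf 'wipefs -a %s\n' "$dev"                                                # clear old partition/filesystem signatures
    printf 'parted -s %s mklabel msdos mkpart primary fat32 1MiB 100%%\n' "$dev"  # new MBR table with one FAT32 partition
    printf 'mkfs.vfat -F 32 -n UNRAID %s1\n' "$dev"                               # FAT32, labelled UNRAID as unraid expects
}

fat32_reset_cmds /dev/sdX
```

After that, the unraid USB creator tool can take over the stick as if it were factory fresh.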
  6. Hi, as I am new here, a moderator will have to move my post. Hardware related: do you mean a bad USB stick, or something else?
  8. Hello community, this is my first post, unfortunately. Apologies for my English, as I am not a native speaker, lol. I am testing unraid on my future rig based on a Gigabyte B450M-DS3H rev 0.1 with a Ryzen 3 2200G. I also have an 8-port PCIe-to-SATA extender card, based on a Marvell 9215 + JMB575 (this card is fully supported by Ubuntu Server; I have one running in production on my DIY Nextcloud server for over 3 years now).
Day one:
- created the USB media and chose stable 6.9.1
- on my future rig, installed unraid and claimed a 30-day trial license.
- created a test array with 3 storage devices, no cache disk yet.
- basic config successfully done (IPv6, virtual network, plugins, docker containers (cloudcommander, swag, nextcloud...))
- access from the internal network and from other external devices OK.
Day two:
- sanity check in the morning OK.
- using the GUI, did a shutdown.
- hardware stop.
- hardware reboot.
- USB key is missing its boot sector: dead.
- reinstalled 6.9.1 on the USB.
- hardware reboot.
- 6.9.1 kernel panic:
VFS: Cannot open root device "(null)" or unknown-block(0,0): error -6
followed by:
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
Call trace:
dump_stack+0x6b/0x83
panic+0xff/0x2a1
mount_block_root+0x131/0x160
? rest_init+0xaf/0xaf
kernel_init+0x5/0xaf
ret_from_fork+0x22/0x30
Kernel Offset: disabled
---[ end kernel panic ]
My guess is a locked array left over from the first test. The USB stick dying after a hardware reboot is not a good sign!!! The key is new; I bought it via Amazon in a promotional bundle, and it is not the stick I am planning to use.
Q1? --> Is this "bad stick" behavior? About the kernel panic: am I stuck or not? I don't remember seeing any XFS or other format type shown yesterday.
Q2? --> As I cannot even boot unraid in safe mode, is there a way to troubleshoot my rig... or should I use external tools to destroy the array leftover from day 1?
Thanks