Hoopster · Members · Posts: 4,568 · Days Won: 26

Everything posted by Hoopster

  1. I made the same suggestion over the weekend to SpencerJ. He indicated the whole team is very concerned about the recent spate of attacks against unRAID servers. They are looking at several ways to get the word out about proper security measures and external access, including in the GUI itself. As JorgeB indicated, there is nothing inherently less or more secure about the most recent version of unRAID. Almost all these cases come down to new users not understanding how to properly secure their servers, or more experienced users forgetting they had left the back door open. It certainly does appear that there are one or more hackers actively looking for exposed unRAID servers.
  2. You should post full diagnostics (Tools --> Diagnostics in the GUI) in your next post, not just the syslog, as the full diagnostics provide much more information about your system configuration. The system share is the default home of docker.img and libvirt.img. Typically, that is a cache-only or cache-prefer share, which means formatting your cache drive wiped out these files unless you had changed their location. Fortunately, you can restore your previous docker applications with their previous configuration intact (Previous Apps in the Apps tab) even after creating a new docker.img file.
  3. Good! Glad you got it worked out. Your "No Soup for You" post and follow-up post made it sound like you were unable to enter the token and were stuck at that message, so I found a post about that situation in the Plex forums. Personally, I had no idea if that would be a solution, as it is nothing I ever needed to do. Nevertheless, thanks to Saarg, you found a way to enter the token and are up and running, and that is what counts.
  4. @Garry050 Any top-level folder you create on a disk is automatically a user share. User shares can span disks according to the allocation method, split level and minimum free settings, so the same folder names will get created automatically as needed on other disks that are included in the share. Normally, you create user shares in the Shares tab, but if you just create a folder at the top-level of a disk, it becomes a share. EDIT: Disks you add to the array automatically are named Disk 1, Disk 2, Disk 3, etc. The disk you have named as "nas" must be a cache pool which you gave that name, however, the same rules apply to top-level folders in cache pools.
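The merged view described above can be sketched with plain directories (an illustration only: the temp dirs stand in for /mnt/disk1 and /mnt/disk2, and the real merging is done by unRAID's shfs at /mnt/user, not by find):

```shell
# Two "disks" each hold part of the same top-level Media folder.
root=$(mktemp -d)
mkdir -p "$root/disk1/Media/Movies" "$root/disk2/Media/TV"

# unRAID would present both under /mnt/user/Media; emulate that merged listing:
find "$root"/disk*/Media -mindepth 1 -maxdepth 1 -printf '%f\n' | sort
```

This prints Movies and TV as one combined listing, just as /mnt/user/Media would show both subfolders regardless of which physical disk holds each one.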
  5. If you have reformatted the flash drive, you need to run the appropriate make_bootable file for your OS (make_bootable.bat on Windows, run as Administrator).
  6. That would be the Intel Celeron J3455. The board has the CPU built in; hence the name.
  7. This thread is six years old and the OP has not visited the forums in over three years. You might get some good answers in the Plex forums, but here is a blog post about moving a Plex database from Windows to docker in unRAID. It is over four years old, so perhaps there is now a better way.
  8. unRAID always uses eth0. If you install a new NIC and it is not eth0, you can assign the MAC address of the desired NIC to eth0 in Settings --> Network Settings.
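For reference, unRAID persists that mapping as udev rules on the flash drive; the fragment below is a sketch of what /boot/config/network-rules.cfg might look like after such a reassignment (the MAC addresses are made up, and the exact comment lines vary by hardware):

```
# PCI device (igb): onboard NIC, demoted to eth1
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="eth1"
# PCI device (tn40xx): add-in NIC, promoted to eth0
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:02", NAME="eth0"
```

Deleting this file and rebooting makes unRAID regenerate the assignments, which is another commonly suggested way to recover after a NIC swap.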
  9. As documented by Squid in the Docker FAQ forum: How to get the docker run command. From a post in the Plex support forums: I finally got it to work. None of the “support” articles were of much help, but here is how I ended up resolving it: 1. Connect a monitor, keyboard and mouse to the server (which is normally headless), and clear the browser history and cache. 2. Disconnect the server from the internet. 3. Access the GUI from localhost:32400/web/. 4. From there I was finally able to see the server. 5. Reconnect the server to the internet. 6. Before logging in (I tried that and it went right back to “no soup for you”), click the “claim server” button; I was then prompted to log in. 7. The server was claimed and I’m now able to access server settings. In the end, as long as my unRAID server was able to reach the Plex sign-in servers during this process, it would not work. They must have some machine identifier somewhere that is preventing you from just logging in as a fresh server.
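Before walking through those steps, it can help to confirm the server is answering locally at all; a quick sketch (the /identity endpoint responds without authentication, and the URL assumes default host networking on the standard port):

```shell
# Probe the local Plex instance; it answers even on an unclaimed server.
PLEX_URL="http://localhost:32400"
if curl -fsS --max-time 2 "$PLEX_URL/identity" >/dev/null 2>&1; then
  echo "Plex reachable; open $PLEX_URL/web/ in a local browser to claim it"
else
  echo "Plex not reachable at $PLEX_URL"
fi
```

If the probe fails, fix the container or network first; the claim steps above assume the local GUI loads.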
  10. According to the StarTech webpage for that NIC, it uses the Tehuti tn4010 driver. This is confirmed by the PCI devices list in your diagnostics. The release notes for unRAID 6.9.0 state the following (this likely applies to 6.9.1 as well, as nothing in those release notes says it has changed):

Note that as we update the Linux kernel, if an out-of-tree driver no longer builds, it will be omitted. These drivers are omitted:
Highpoint RocketRaid r750 (does not build)
Highpoint RocketRaid rr3740a (does not build)
Tehuti Networks tn40xx (does not build)
If you require one of these drivers, please create a Bug Report and we'll spend some time looking for alternatives. Better yet, pester the manufacturer of the controller and get them to update their drivers.

It appears the Linux kernel in unRAID 6.9.1 no longer supports the driver for your NIC.

EDIT: there is also this line from your syslog:

Mar 17 17:46:54 Orthanc kernel: tn40xx: loading out-of-tree module taints kernel.

and later there are three call traces (not good) that appear related to the NIC:

Mar 17 17:46:54 Orthanc kernel: tn40xx: register_netdev failed
Mar 17 17:46:54 Orthanc kernel: ------------[ cut here ]------------
Mar 17 17:46:54 Orthanc kernel: WARNING: CPU: 5 PID: 1520 at net/core/dev.c:9447 rollback_registered_many+0x88/0x4ad
Mar 17 17:46:54 Orthanc kernel: Modules linked in: edac_mce_amd kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd igb glue_helper mpt3sas i2c_piix4 i2c_algo_bit raid_class tn40xx(O+) nvme scsi_transport_sas i2c_core wmi_bmof rapl mxm_wmi k10temp nvme_core wmi ccp ahci libahci button acpi_cpufreq
Mar 17 17:46:54 Orthanc kernel: CPU: 5 PID: 1520 Comm: udevd Tainted: G W O 5.10.21-Unraid #1
Mar 17 17:46:54 Orthanc kernel: Hardware name: System manufacturer System Product Name/ROG STRIX B450-F GAMING, BIOS 3003 12/09/2019
Mar 17 17:46:54 Orthanc kernel: RIP: 0010:rollback_registered_many+0x88/0x4ad
Mar 17 17:46:54 Orthanc kernel: Code: 01 e8 64 19 17 00 0f 0b 48 8b 55 00 48 8b 0a 48 8d 42 98 48 83 e9 68 48 8d 78 68 48 39 ef 74 2d 8a 90 90 04 00 00 84 d2 75 09 <0f> 0b e8 65 cd ff ff eb 0d fe ca c6 80 91 04 00 00 01 74 02 0f 0b
Mar 17 17:46:54 Orthanc kernel: RSP: 0018:ffffc90000affa38 EFLAGS: 00010246
Mar 17 17:46:54 Orthanc kernel: RAX: ffff888103f98000 RBX: ffff888103f98000 RCX: ffffc90000affa18
Mar 17 17:46:54 Orthanc kernel: RDX: ffff888103f98000 RSI: 0000000000000000 RDI: ffff888103f98068
Mar 17 17:46:54 Orthanc kernel: RBP: ffffc90000affa80 R08: 0000000000000000 R09: 00000000ffffdfff
Mar 17 17:46:54 Orthanc kernel: R10: ffffc90000aff8f8 R11: ffffc90000aff8f0 R12: ffffc90000111000
Mar 17 17:46:54 Orthanc kernel: R13: ffff888103f988c0 R14: 0000000000000000 R15: ffff888103f98000
Mar 17 17:46:54 Orthanc kernel: FS: 000014cff488ebc0(0000) GS:ffff88840e940000(0000) knlGS:0000000000000000
Mar 17 17:46:54 Orthanc kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mar 17 17:46:54 Orthanc kernel: CR2: 0000000001033000 CR3: 0000000103080000 CR4: 00000000003506e0
Mar 17 17:46:54 Orthanc kernel: Call Trace:
Mar 17 17:46:54 Orthanc kernel: unregister_netdevice_queue+0x95/0xed
Mar 17 17:46:54 Orthanc kernel: unregister_netdev+0x15/0x1b
Mar 17 17:46:54 Orthanc kernel: bdx_probe+0xa25/0xaad [tn40xx]
Mar 17 17:46:54 Orthanc kernel: local_pci_probe+0x3c/0x7a
Mar 17 17:46:54 Orthanc kernel: pci_device_probe+0x140/0x19a
Mar 17 17:46:54 Orthanc kernel: ? sysfs_do_create_link_sd.isra.0+0x6b/0x98
Mar 17 17:46:54 Orthanc kernel: really_probe+0x157/0x341
Mar 17 17:46:54 Orthanc kernel: driver_probe_device+0x63/0x92
Mar 17 17:46:54 Orthanc kernel: device_driver_attach+0x37/0x50
Mar 17 17:46:54 Orthanc kernel: __driver_attach+0x95/0x9d
Mar 17 17:46:54 Orthanc kernel: ? device_driver_attach+0x50/0x50
Mar 17 17:46:54 Orthanc kernel: bus_for_each_dev+0x70/0xa6
Mar 17 17:46:54 Orthanc kernel: bus_add_driver+0xfe/0x1af
Mar 17 17:46:54 Orthanc kernel: driver_register+0x99/0xd2
Mar 17 17:46:54 Orthanc kernel: ? bdx_tx_timeout+0x2c/0x2c [tn40xx]
Mar 17 17:46:54 Orthanc kernel: do_one_initcall+0x71/0x162
Mar 17 17:46:54 Orthanc kernel: ? do_init_module+0x19/0x1eb
Mar 17 17:46:54 Orthanc kernel: ? kmem_cache_alloc+0x108/0x130
Mar 17 17:46:54 Orthanc kernel: do_init_module+0x51/0x1eb
Mar 17 17:46:54 Orthanc kernel: load_module+0x1a44/0x203c
Mar 17 17:46:54 Orthanc kernel: ? map_kernel_range_noflush+0xdf/0x255
Mar 17 17:46:54 Orthanc kernel: ? __do_sys_init_module+0xc4/0x105
Mar 17 17:46:54 Orthanc kernel: ? _cond_resched+0x1b/0x1e
Mar 17 17:46:54 Orthanc kernel: __do_sys_init_module+0xc4/0x105
Mar 17 17:46:54 Orthanc kernel: do_syscall_64+0x5d/0x6a
Mar 17 17:46:54 Orthanc kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9
Mar 17 17:46:54 Orthanc kernel: RIP: 0033:0x14cff4ef009a
Mar 17 17:46:54 Orthanc kernel: Code: 48 8b 0d f9 7d 0c 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 49 89 ca b8 af 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d c6 7d 0c 00 f7 d8 64 89 01 48
Mar 17 17:46:54 Orthanc kernel: RSP: 002b:00007ffef575f7f8 EFLAGS: 00000202 ORIG_RAX: 00000000000000af
Mar 17 17:46:54 Orthanc kernel: RAX: ffffffffffffffda RBX: 000000000065e8f0 RCX: 000014cff4ef009a
Mar 17 17:46:54 Orthanc kernel: RDX: 000014cff4fd097d RSI: 00000000000aed58 RDI: 000000000068b570
Mar 17 17:46:54 Orthanc kernel: RBP: 000000000068b570 R08: 000000000064701a R09: 0000000000000006
Mar 17 17:46:54 Orthanc kernel: R10: 0000000000647010 R11: 0000000000000202 R12: 000014cff4fd097d
Mar 17 17:46:54 Orthanc kernel: R13: 00007ffef575fd60 R14: 000000000066d270 R15: 000000000065e8f0
Mar 17 17:46:54 Orthanc kernel: ---[ end trace 338e0f1fa31dc83e ]---
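When checking your own diagnostics for this, a simple grep pulls out the relevant driver lines; the sketch below runs against an inline two-line sample (on a real system you would grep the syslog from the diagnostics zip instead):

```shell
# Build a small sample from the log quoted above, then filter it.
sample=$(mktemp)
cat <<'EOF' > "$sample"
Mar 17 17:46:54 Orthanc kernel: tn40xx: loading out-of-tree module taints kernel.
Mar 17 17:46:54 Orthanc kernel: tn40xx: register_netdev failed
EOF

# Any "register_netdev failed" hit means the driver never brought the NIC up.
grep 'tn40xx' "$sample"
```

The same grep against a healthy syslog would show only the module-load line, not the register_netdev failure.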
  11. If you reinstall the docker containers from the Previous Apps section of CA, they will be reinstalled with your prior configuration intact. If you reinstall them by searching in Community Apps and downloading again, they will be reinstalled with the defaults as a new installation.
  12. Here are some facts about unRAID that will hopefully be helpful to you:
1 - unRAID is not a full-fledged Linux OS. It is a very stripped-down version of Slackware Linux and contains only the pieces necessary to run the unRAID NAS appliance.
2 - unRAID has no firewall capability and has NEVER been advertised as a secure, Internet-hardened operating system. It should not be exposed directly to the Internet.
3 - unRAID cannot be exposed to the Internet by the default installation of the OS. As @trurl has pointed out to you, something in your router configuration has been done to expose your unRAID server to the Internet.
4 - There are many secure ways of accessing your server over the Internet. WireGuard is one of several and happens to be built into unRAID.
5 - The "root" user allows local GUI and terminal access. It should always have a secure password. The GUI is not intended to be accessed directly via the Internet.
6 - The traditional users concept of other OSes does not really exist, as such users are not really necessary. You can control access to unRAID shares via the additional users/rights settings.
7 - Port forwarding/opening ports (assuming it is done in the correct way) is not a huge exposure risk and is necessary for the WAN/LAN interactions needed to allow secure remote access and services.
  13. I had the same issue with Telegraf in 6.9.0. This has been fixed for me in 6.9.1. Drives now spin down properly even with telegraf running. If this does not work for you, perhaps something else besides telegraf is causing the problem. With telegraf, it appeared to be the call to "/usr/sbin/smartctl" that was preventing the drives from spinning down, as Limetech changed somewhat how smartctl is handled.
  14. unRAID has no firewall itself and will depend on whatever protections you have at the router level. You need to make sure you have good firewall protection enabled in the router and that all ports are closed (except those you need to forward for specific purposes). Also make sure the unRAID server IP address has not been placed in the router DMZ, if it has one. A DMZ basically bypasses all firewall and routing rules and lets anything in it be exposed to the outside world. Run the GRC Shields Up scan from a computer on your LAN to see what your exposure through the router/firewall is currently. UPDATE: You should also run the common ports and all service ports scans to see which ports are currently open and responding to probes from the Internet.
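From inside the LAN you can at least see which ports a given host answers on; a rough sketch using bash's built-in /dev/tcp (this checks the host it runs on, not what the router exposes to the WAN — only an external scan like Shields UP shows that):

```shell
# Probe a few common ports on this host; "closed" just means nothing answered.
for port in 22 80 443; do
  if timeout 1 bash -c "echo > /dev/tcp/127.0.0.1/$port" 2>/dev/null; then
    echo "port $port: something listening"
  else
    echo "port $port: closed"
  fi
done
```

Point the IP at your server instead of 127.0.0.1 to see what it exposes to the rest of the LAN.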
  15. Do you have your server in a DMZ? Is port 22 open to the big bad world? Such login attempts from the outside world are the result of opening up unRAID to the Internet somehow. There are ways to securely access your server remotely without exposing common ports for direct access: WireGuard (built into unRAID), an OpenVPN docker container, ZeroTier, the new My Servers unRAID plugin with SSL access, or a Let's Encrypt reverse proxy. Same for me since 2011. As an OS appliance, unRAID is not intended to be directly exposed to the Internet. EDIT: and no unsuccessful login attempts other than me either.
  16. unRAID creates User Shares, which are folders that can span multiple disks. Disk spanning happens automatically. For example, if you have a "Media" share and have told it to use all disks, unRAID will automatically write content to another disk once the current disk reaches the storage limit determined by the allocation method, split level, and minimum free space settings. All top-level folders on a disk automatically become User Shares. Rather than creating folders on a disk, you should create user shares through the "Shares" tab in unRAID and set each share's parameters. Within those user shares, you can create whatever subfolders you need for organizational purposes. Each share can include or exclude specific disks for its use. My "Movies" share as an example: Cache pools can include one or more disks (usually SSDs) for the purpose of caching files before they are written to the array, or for storing the "appdata" share (Docker containers), virtual machine disks, downloads, etc. Each share can have a different cache pool assigned, or shares can use the same cache pool. Cache pools are not included in the parity-protected array.
  17. The time it takes to build the initial parity information depends on the size of the parity disk. For example, when I had 3 TB parity disks a few years ago, the parity calculation took 7 to 8 hours. Now that I have 8 TB parity drives, it takes approximately 16.5 hours. If your initial parity calculation reports that it will take many days, or an unreasonable number of hours relative to the disk size, that is an indication of hardware or cabling problems. The system can be used normally during parity calculations, but you may see some impact on system performance. If you are only creating shares (top-level folders), adding plugins, making configuration changes, etc., you should see very little impact. Writing large amounts of new data to the array, or trying to stream movies during a parity check, will likely have an impact. After the initial parity build, parity is maintained in real time whenever data is written to the array. Monthly parity checks are recommended to make sure the data is valid. These take the same amount of time as the initial parity build. Mine runs on the 15th of each month, and yesterday it took the usual 16.5 hours. There is a plugin that lets you schedule parity checks across several days so they run only at night and do not interfere with your daily use of the server. When a new disk is added to the array, it is recommended that you first "preclear" it to test the disk thoroughly. This preclear can take a long time depending on how thoroughly you choose to run it. Preclearing my 8 TB array disks takes about 35 hours. However, preclearing a new disk can be done while the system is operating normally and does not affect system performance.
It is not normal for the language to revert to English when the system restarts. I do not use another language myself, but there must be a solution. System configuration information, plugins, docker container configurations, etc. are written to the flash drive and should persist across reboots. Both WireGuard and OpenVPN are configured on my server. I prefer WireGuard, since it is built into unRAID. There are support forums for both (English only): WireGuard quickstart and support; OpenVPN-AS support. Limetech recommends a USB flash drive of 32GB or less. Many 64GB drives do not work well.
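The ~16.5-hour figure is consistent with simple arithmetic; assuming an average sustained read rate of about 135 MB/s across the whole 8 TB disk (an illustrative number, not a measured one):

```shell
# hours = disk bytes / (bytes per second) / (seconds per hour)
awk 'BEGIN { printf "%.1f hours\n", 8e12 / 135e6 / 3600 }'
# → 16.5 hours
```

A slower average rate, or errors forcing retries, pushes this figure up, which is why a multi-day estimate points to hardware or cabling problems.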
  18. Yes, an SSD as a cache drive, with the appdata, domains, and system shares set to cache-prefer or cache-only. Cache-prefer will be needed initially in order to move these shares from the array to the cache drive (with the Mover) once it is installed and configured in your unRAID server.
  19. This is why it is recommended to install appdata, docker.img, etc. on an SSD. There is always activity from docker and individual containers that could cause an HDD storing these files to be spun up. Database maintenance, backups, config changes, or any access to any file in appdata or docker.img for any reason will cause the HDD to spin up. A particular docker app does not even need to be running for such accesses, and the resulting spin-ups, to occur.
  20. Go to the terminal, type 'top', and see if wsdd is the offending process. If it is wsdd, go into Settings --> SMB and either disable WSD if you don't need to use it, or enter the parameter shown below in WSD Options:
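If you prefer a non-interactive check, the standard procps ps in unRAID's shell can list the busiest processes directly (a sketch; column widths vary by system):

```shell
# Show the five busiest processes by CPU share, busiest first.
ps -eo pid,comm,%cpu --sort=-%cpu | head -n 5
```

If wsdd tops this list, the WSD settings change described above applies.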
  21. Check in top/htop from the CLI and see what process(es) is pegging the CPU. Is it always the same CPU/thread at 100% or does it move around to a different one?
  22. I am at 36C when both unRAID and the VM are idling; however, it does not take much to bump it up 20C and under high load, as others have mentioned, it can get quite hot. I have seen temps up in the low 80s when all cores/threads are getting a good workout.
  23. This is a fairly standard list based on the way VMs work. Here is the list I see with a few additions:
  24. It's in your DelugeVPN folder in appdata. The wireguard folder is a subfolder of that appdata folder.
  25. I do not have that in Extra Parameters either and it is working fine. It probably would not hurt to add it though. Endpoint comes from the list in your supervisord.log file. Pick one of these and modify wg0.conf with the one you want. It defaults to nl-amsterdam:

2021-03-10 20:05:57,230 DEBG 'start-script' stdout output: [info] List of PIA endpoints that support port forwarding:-
2021-03-10 20:05:57,231 DEBG 'start-script' stdout output:
al.privacy.network, ad.privacy.network, austria.privacy.network, brussels.privacy.network, ba.privacy.network, sofia.privacy.network, czech.privacy.network, denmark.privacy.network, ee.privacy.network, fi.privacy.network, france.privacy.network, de-frankfurt.privacy.network, de-berlin.privacy.network, gr.privacy.network, hungary.privacy.network, is.privacy.network, ireland.privacy.network, man.privacy.network, italy.privacy.network, lv.privacy.network, liechtenstein.privacy.network, lt.privacy.network, lu.privacy.network, mk.privacy.network, malta.privacy.network, md.privacy.network, monaco.privacy.network, montenegro.privacy.network, nl-amsterdam.privacy.network, no.privacy.network, poland.privacy.network, pt.privacy.network, ro.privacy.network, rs.privacy.network, sk.privacy.network, spain.privacy.network, sweden.privacy.network, swiss.privacy.network, ua.privacy.network, uk-london.privacy.network, uk-southampton.privacy.network, uk-manchester.privacy.network, uk-2.privacy.network, bahamas.privacy.network, ca-montreal.privacy.network, ca-toronto.privacy.network, ca-ontario.privacy.network, ca-vancouver.privacy.network, greenland.privacy.network, mexico.privacy.network, panama.privacy.network, ar.privacy.network, br.privacy.network, venezuela.privacy.network, yerevan.privacy.network, bangladesh.privacy.network, cambodia.privacy.network, china.privacy.network, cyprus.privacy.network, georgia.privacy.network, hk.privacy.network, in.privacy.network, israel.privacy.network, japan.privacy.network, kazakhstan.privacy.network, macau.privacy.network, mongolia.privacy.network, philippines.privacy.network, qatar.privacy.network, saudiarabia.privacy.network, sg.privacy.network, srilanka.privacy.network, taiwan.privacy.network, tr.privacy.network, ae.privacy.network, vietnam.privacy.network, aus-perth.privacy.network, au-sydney.privacy.network, aus-melbourne.privacy.network, nz.privacy.network, dz.privacy.network, egypt.privacy.network, morocco.privacy.network, nigeria.privacy.network, za.privacy.network
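Swapping the endpoint is a one-line edit; a sketch against a demo copy (the real file typically lives in the container's appdata under .../binhex-delugevpn/wireguard/wg0.conf, and port 1337 is PIA's usual WireGuard port — verify both against your setup):

```shell
# Demo copy of the relevant wg0.conf fragment.
conf=$(mktemp)
cat <<'EOF' > "$conf"
[Peer]
Endpoint = nl-amsterdam.privacy.network:1337
EOF

# Repoint it at a different PIA region from the supervisord.log list.
sed -i 's/^Endpoint = .*/Endpoint = swiss.privacy.network:1337/' "$conf"
grep '^Endpoint' "$conf"
# → Endpoint = swiss.privacy.network:1337
```

Restart the container after editing so the new endpoint is picked up.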