ken-ji

Members
  • Content Count

    917
  • Joined

  • Last visited

  • Days Won

    4

ken-ji last won the day on June 27 2018

ken-ji had the most liked content!

Community Reputation

114 Very Good

About ken-ji

  • Rank
    Advanced Member

Converted

  • Gender
    Male
  • Location
    Philippines


  1. I think your USB stick should work just fine. If it doesn't, note that adding a USB expansion card probably won't help either, as the UEFI/BIOS needs to be able to boot off the stick directly.
  2. I reread the bulletin and the exploit, and it seems the flaw can only be triggered from within the container while a docker cp command is being run from the host - which, as Docker's security engineers point out, is a very small window of opportunity, and a compromised docker container is necessary to get the whole exploit started.
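For illustration, the window described is a classic time-of-check/time-of-use (TOCTOU) symlink race. Here is a minimal, self-contained Python sketch of the general pattern - not the actual exploit, and all file names here are made up:

```python
import os
import shutil
import tempfile

# TOCTOU sketch: a "copier" validates a path, then an attacker swaps it
# for a symlink before the copy happens, redirecting the read elsewhere.
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "data"))
with open(os.path.join(base, "data", "file.txt"), "w") as f:
    f.write("container data")
with open(os.path.join(base, "secret.txt"), "w") as f:
    f.write("host secret")  # stands in for a file outside the container

target = os.path.join(base, "data")
# 1. Check: the copier verifies 'target' is a real directory, not a link
assert not os.path.islink(target)
# 2. Race: the attacker swaps the directory for a symlink elsewhere
shutil.rmtree(target)
os.symlink(base, target)
# 3. Use: the copier now follows the symlink and reads outside the
#    directory it validated a moment ago
leaked = open(os.path.join(target, "secret.txt")).read()
```

The attack only works if the swap lands between steps 1 and 3, which is why the real-world window is so small.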
  3. Hi, just checking back at your logs and I see this:
     May 31 21:17:39 NasUnraid-bis ntpd[1693]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
     May 31 22:22:51 NasUnraid-bis kernel: scsi 10:0:9:0: Direct-Access ATA WDC WD20EZRZ-22Z 0A80 PQ: 0 ANSI: 6
     May 31 22:22:51 NasUnraid-bis kernel: scsi 10:0:9:0: SATA: handle(0x0013), sas_addr(0x500304800156ba44), phy(4), device_name(0x00000000332e3020)
     May 31 22:22:51 NasUnraid-bis kernel: scsi 10:0:9:0: enclosure logical id (0x50030442523a2033), slot(0)
     May 31 22:22:51 NasUnraid-bis kernel: scsi 10:0:9:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
     May 31 22:22:51 NasUnraid-bis kernel: sd 10:0:9:0: Power-on or device reset occurred
     May 31 22:22:51 NasUnraid-bis kernel: sd 10:0:9:0: Attached scsi generic sg24 type 0
     May 31 22:22:51 NasUnraid-bis kernel: sd 10:0:9:0: [sdx] 3907029168 512-byte logical blocks: (2.00 TB/1.82 TiB)
     May 31 22:22:51 NasUnraid-bis kernel: sd 10:0:9:0: [sdx] 4096-byte physical blocks
     May 31 22:22:51 NasUnraid-bis kernel: sd 10:0:9:0: [sdx] Write Protect is off
     May 31 22:22:51 NasUnraid-bis kernel: sd 10:0:9:0: [sdx] Mode Sense: 7f 00 10 08
     May 31 22:22:51 NasUnraid-bis kernel: sd 10:0:9:0: [sdx] Write cache: enabled, read cache: enabled, supports DPO and FUA
     May 31 22:22:51 NasUnraid-bis kernel: sdx:
     May 31 22:22:51 NasUnraid-bis kernel: sd 10:0:9:0: [sdx] Attached SCSI disk
     Seems like, for some reason, the newly added disk does not power up until 1h 10m after the enclosure is started. I didn't see anything else in the logs. Have you tried this? While the array is stopped, unplug the drive, wait 5 minutes, and stick it back in. Is it detected then? I'm guessing something is up with the expander you are using, but I've never had such an issue - then again, my expander/enclosure is only an 8-port unit and I don't have any other bays chained off it.
     I see that you have a lot of WD 2TB Blue drives; it might be a good idea to at least move up to 4TB drives (preferably 8TB), as they would give you better storage density and you'd need fewer drives overall. You'll need to get 3 to start with, though, as the parity drives need to be upgraded too.
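As a back-of-the-envelope on the density point - assuming dual parity, which is why 3 drives are needed to start; the drive counts below are made-up examples:

```python
# Usable capacity in an Unraid-style array: parity drives hold no data,
# so usable space is (total drives - parity drives) * drive size.
def usable_tb(drive_tb, n_drives, n_parity=2):
    """Usable terabytes for n_drives of drive_tb each, dual parity by default."""
    return drive_tb * (n_drives - n_parity)

many_small = usable_tb(2, 12)  # twelve 2TB drives
few_large = usable_tb(8, 5)    # five 8TB drives
```

Twelve 2TB drives yield 20TB usable, while just five 8TB drives yield 24TB - more space from seven fewer drives, ports, and points of failure.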
  4. Sounds serious, but it would only affect users who still keep actual data inside docker containers and need to manually copy it out.
  5. Connect everything, then get the diagnostics file (Tools | Diagnostics); hopefully someone will spot a clue to your problem.
  6. There is one thing to consider with NAS vs Desktop drives, and this is dependent on your case/enclosure: modern NAS drives have vibration sensors to compensate for having a higher density of drives packed together.
  7. If you stop the array, how many drive slots are listed on the Main page? Have you tried stopping the array and moving a disk from slots 1..8 to slot 9, then checking whether Unraid still sees the disk?
  8. My entire array is in an external JBOD enclosure
  9. Depending on the exact size of my browser window, this sometimes happens: the third column starts wherever the 2nd column ends. I've folded the 2nd column to hide personal data and also to make the whole thing fit my display. When opened up:
  10. Not going to try to convince you - but as a PSA on containers vs VMs: they have the same practical security unless your hardware and OS support CPU and memory isolation (IBM's Power series is the only thing I've seen that does this), in which case a VM wins hands down. The important part of security is the application, because even with netfilter protection, a VM's entire network stack is still technically exposed to any possible vulnerabilities. With containers and macvlan (almost forgot about that one), the attack surface shrinks to the application running in the container on its dedicated IP. And standard containers using the default bridge networking are like having a simple Linux router in front providing port forwarding. The usual port forwarding through a router in front of either a VM or container helps a lot with security either way. AFAIK, running Nextcloud with a vulnerability in a container would leave the attacker in the Nextcloud container, which presumably has nothing else to leverage off, and they would need to figure out a way to move to another target - the data in the mysqldb container? or the host? In the VM scenario, the attacker would have been closer to the data and about as close to the host. The bit about HVM is that we are in 2019, but not all countries and users have the money or access to 2015+ hardware; some are still rocking something from the early 2000s, and for them only containers work. I, for example, decided to use only a Pentium G4620 processor, because an Emby container with HW transcoding on the iGPU works even better than an i3/i5/i7 + Nvidia GPU in a VM, at a significant fraction of the cost. (The fact that I'm using an ITX board and need the only slot for my HBA is also a factor here.)
  11. Shouldn't be a headache, as the migration would of course be finished as soon as Unraid can detect all your disks as properly connected. Then upgrading parity would be a simple remove-and-replace.
  12. One option that might work is to set up unRAID on the current PC without touching or overwriting the OS drive or the Storage Spaces drives. You need at least one new hard drive (get 8TB if you can, and remember the parity drive needs to be just as big as or bigger than the other drives) and another smaller drive (preferably an SSD) to act as cache and host a Windows Server 2019 VM.
     * Unplug the Windows Server 2019 HDD so we don't accidentally touch it. Unplug the Storage Spaces drives as well, to be safe.
     * Boot up unRAID.
     * Assign the new drive as an array member.
     * Assign the other drive as a cache drive.
     * Create a Windows Server 2019 VM.
     * Shut down.
     * Attach the Storage Spaces drives (by USB if you absolutely have to - I don't know if there will be recognition problems on Windows).
     * Pass the disks through to the VM (by attaching the USB bay, or via controller passthrough).
     * Windows Server 2019 should be able to mount the Storage Spaces drives.
     * Windows Server 2019 should now be able to move files off a single drive (or at least migrate enough to fill the unRAID drive).
     * Remove the freed drive from Storage Spaces. Shut down the VM and detach the drive from it.
     * Stop the Unraid array and add the freed drive.
     * Repeat and loop until all drives have been processed.
     * Delete the Windows Server 2019 VM.
     * Delete the Windows Server 2019 installation.
     * Reuse the last HDD.
     * Add the parity drive.
     There, you should be done.
  13. A VM with all the apps would only be faster than multiple containers if the data flows over unix sockets; otherwise it's the same thing. A VM is about as secure as a container. A VM is only an option if your server has hardware support, and it can still get really slow at certain cryptographic operations unless the CPU does them in hardware. A VM can only access other server hardware if the server has hardware passthrough support; a container can access server hardware as long as the server OS has drivers for it. A container works without needing hardware support and will always run at bare-metal speeds - not counting possible networking issues. A container doesn't even need firewalling, as only the application running in it is exposed on the network interface. A container doesn't need patching, only checking whether the application and related libraries have vulnerabilities. I'm also an old-school sysadmin, but I see that containers work better than VMs in many situations, unless you want to support multiple tenancy, hardware passthrough, and/or complex firewalling. I mean, do you really need to emulate an RTC or a floppy controller to run a web server?
  14. rsync will always walk the full list of files every time, and the non-default option of using checksums will definitely cause all files on both sides to be read, to generate the checksums for comparison. rclone, when sending to cloud storage, I think uses the size-and-timestamp check first before falling back to checksums (it depends on the cloud provider).
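To make the cost difference concrete, here is a rough Python sketch of the two comparison modes - a simplification of what rsync actually does (real rsync uses rolling checksums, not whole-file MD5; the function name is made up):

```python
import hashlib
import os

def needs_transfer(src, dst, use_checksum=False):
    """Decide whether dst must be updated from src (simplified rsync logic)."""
    if not os.path.exists(dst):
        return True
    if use_checksum:
        # --checksum mode: read and hash BOTH files in full (expensive)
        digest = lambda p: hashlib.md5(open(p, "rb").read()).hexdigest()
        return digest(src) != digest(dst)
    # default "quick check": size + mtime only, no file contents read
    s, d = os.stat(src), os.stat(dst)
    return s.st_size != d.st_size or int(s.st_mtime) != int(d.st_mtime)
```

The default path touches only metadata, while the checksum path reads every byte on both sides - which is why a full pass with --checksum is so much slower on large trees.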
  15. You seem to have a 2nd NIC: eth1? Or did you remove this at some point? Run:
     docker network list
     docker network inspect [name of network]
     iptables -vnL