Leaderboard

Popular Content

Showing content with the highest reputation on 01/09/19 in Posts

  1. Unlikely. To rebuild:
     - Shut down.
     - Replace one disk with a new one of the same or larger size, but no larger than parity.
     - Boot up.
     - Assign the new disk to the slot for the missing disk.
     - Start the array to begin the rebuild.
     If for some reason it offers to Format anything, DON'T. After the rebuild completes, run a non-correcting parity check (a hedged command-line sketch follows below). Keep the original disk for now in case it is needed as a backup for file recovery. Repeat for the other disk. If you have any doubts, questions, or problems, come back for further advice.
    1 point
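    A hedged command-line sketch of that non-correcting parity check, assuming a stock Unraid install where mdcmd lives at /usr/local/sbin (the same check can be started from the Main tab by unticking "Write corrections to parity"):
      # start a read-only (non-correcting) parity check
      /usr/local/sbin/mdcmd check NOCORRECT
      # watch progress (resync counters appear in the status dump)
      /usr/local/sbin/mdcmd status | grep -i resync
      # cancel a running check if needed
      /usr/local/sbin/mdcmd nocheck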
  2. Also, I do not live in the US. The VPS where I host my VPN is located in Romania, and one of the reasons I use it is that they advertise "Allow IRC Servers, VPN, Torrents, Free DMCA". Not to mention the speeds are awesome: I get my full 100 Mbps down without an issue and pay 2.50 USD for the service. I have tried many VPN providers over the years, and the speeds are just lacking.
    1 point
  3. Disk dropped offline. Rebooting should bring it back online, though it will remain disabled. Change the SATA cable when you reboot, and check the power cable as well (a hedged SMART-check sketch follows below).
    1 point
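    A hedged aside: before trusting the disk again it may be worth a look at its SMART data from the Unraid terminal. smartctl ships with Unraid; /dev/sdX below is a placeholder for the affected drive:
      # overall health plus the attribute table
      smartctl -a /dev/sdX
      # a climbing CRC error count usually points at cabling rather than the disk
      smartctl -A /dev/sdX | grep -i crc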
  4. You can use myJDownloader with a mobile app/browser extension. To my knowledge, this can be used to resolve any captcha.
    1 point
  5. If the ATT modem is forwarding WAN port 80 to LAN port 180, and the Google Wifi is connected to the ATT LAN, then you need to tell the Google Wifi to forward port 180 to your Unraid Docker IP, not 80. The info has to make it all the way through and back: Internet port 80 <-> ATT WAN 80 (moved to LAN 180) <-> Google Wifi WAN 180 (moved to LAN 180) <-> Docker host port 180 (mapped to the LetsEncrypt container's port 80) <-> application. Same thing with 443. The next device in the chain has to be listening on the correct port; you told the Google Wifi to listen on 80, but the ATT is talking to 180. (A hedged curl sketch for testing the chain follows below.)
    1 point
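    A hedged way to test that chain hop by hop with curl, assuming the LetsEncrypt container answers HTTP on the mapped port and example.com stands in for your real domain:
      # from the Unraid host: is the container answering on the mapped port?
      curl -I http://127.0.0.1:180
      # from another LAN machine: does the Google Wifi side reach it?
      curl -I http://<unraid-lan-ip>:180
      # from outside (e.g. a phone on cellular): does the full chain work?
      curl -I http://example.com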
  6. Just a quick note in case somebody else gets stuck on this: I was having trouble getting my VM to see the USB stick, which was plugged into a USB 3.0 port. I had to change the emulated controller from the default EHCI USB 2.0 one to an XHCI USB 3.0 one to get it to work (either nec-xhci or qemu-xhci works, though a quick google suggests qemu-xhci is preferred; a sketch of the XML change follows below).
    1 point
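    For reference, that controller change amounts to editing the VM's XML (VMs tab -> Edit XML). A hedged sketch of the relevant libvirt element; the "before" model shown is an assumption about what the template used:
      <!-- before: an emulated USB 2.0 (EHCI) controller, e.g. -->
      <controller type='usb' index='0' model='ich9-ehci1'/>
      <!-- after: an emulated USB 3.0 (XHCI) controller; nec-xhci also works -->
      <controller type='usb' index='0' model='qemu-xhci' ports='15'/>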
  7. By "can't uncheck", you mean it's grayed out, right? Just to be sure: did you provide enough data to the plugin to perform the operation (files/folders and a target drive chosen in Gather, the full steps completed in Scatter)? Other than that, I wouldn't really know. In Chrome, you could take a look at the Developer Console and check for any errors.
    1 point
  8. The last time I checked, most of the SCSI changes were implemented in kernel 4.19. I haven't done a full 4.19 vs 4.20 breakdown of the SCSI and FS areas/modules to see what additional changes landed. If 6.7 drops with 4.19, we "should" be good. If 6.7 comes with a Slackware re-baseline, even better, as there are several updated packages that would complement the improvements. The other area I have been getting familiar with is UNMAP. Like fstrim, it tells the underlying storage device which blocks are no longer in use so they can be reclaimed (a brief fstrim sketch follows below). Again, learning as time permits. Nevertheless, the SCSI community seems to acknowledge that the consolidation of several modules, along with language/library optimizations, has affected several functionalities in the HBA world. I'm really hoping it's all put to bed in 4.19 or 4.20. Again, fingers crossed.
    1 point
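    For readers who haven't met the fstrim side of that comparison: on a mounted filesystem it reports unused blocks down to the device, which is roughly what UNMAP does at the SCSI layer. A minimal sketch, assuming an SSD cache mounted at /mnt/cache on a filesystem that supports discard:
      # discard unused blocks; -v reports how much was trimmed
      fstrim -v /mnt/cache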
  9. MAN, I KNOW, RIGHT! I always want to create new accounts on boards I've never posted on and ask questions that are crazily discussed at holiday parties! It's like the other day, on the second night of Hanukkah, Ishmael and I were having some friends over, and some guy was like, "If I were going to do 10GbE in my house, I would go fiber, I don't care about the cable expense!" But this other guy was all, "MAN, you gotta go copper. It's a little pricey now for the cards, but it's the future!" So after that, I went and created an account on the Synology forums and posted: "Do you use 10GbE? I have tried several 10GbE hardware providers like Intel, Quanta and, as of late, Mellanox. I would rank Mellanox the highest because it has the best quality for a very low price; really impressed with the provider. What do you use?" Totally legit. Welcome to the forums.
    1 point
  10. Using a network share as a Time Machine destination is problematic, even using Apple's own Time Capsule. An external hard disk plugged into the USB port of your Mac is much faster and much more reliable.

      The weakness is in the use of the sparse bundle disk image. A sparse bundle consists of a containing folder, a few database files, a plist that stores the current state of the image, and nested subfolders containing many thousands of small (8 MiB) 'band' files. An image file has to be used in order to recreate the journaled HFS+ file system that Time Machine requires to support hard-linked folders on network storage, while using many small files to create the image (rather than one huge monolithic file) allows updates to be made at a usable speed. But sparse bundles have shown themselves to be fragile: often Time Machine will detect a problem, spend a long time checking the image for consistency, then give up and prompt the user that it needs to start over from scratch, losing all the previous backups.

      Because there's a disk image on a file server, a double mount is involved. First the Mac has to mount the network share, then it has to mount the sparse bundle disk image. Once that is done it treats the disk image exactly as it would a locally connected hard disk that you have dedicated to Time Machine use. Sparse bundles grow by writing more band files, so you have the opportunity to specify the maximum size of the image. If you don't, it will eventually grow until it fills all the space available on the share.

      If you still want to do it, here's what I'd do. First create a user on your Unraid server that's only going to be used for Time Machine. Let's call that user 'tm'. Set a password. Now enable user shares, disable disk shares and create a new user share; let's call it 'TMBackups'. Include just one disk and set Use cache disk to No. You can set Minimum Free Space quite low (e.g. 10 MB) since the largest files in the share will be the 8 MiB band files. Allocation method and split level are irrelevant if your user share is restricted to a single disk. Under SMB Security Settings for the share, turn Export off. Under NFS Security Settings, confirm that Export is off. Under AFP Security Settings, turn Export on. If you want to restrict the size occupied by your Time Machine backups, do so here. Even if you want it to be able to use the whole disk, it's worth setting a limit so that the disk doesn't become totally full; I fill up my Unraid disks but I like to leave about 10 GB free. Set Security to Private and give your new user 'tm' exclusive read/write access.

      Consider moving the Volume dbpath away from its default location (the root of the share) to somewhere faster, where it's less likely to get damaged. The Volume database is the .AppleDB folder that can be found in the root of an AFP share. It too is fragile and benefits from fast storage. I moved mine onto my cache pool by entering the path /mnt/user/system/AppleDB (i.e. a subfolder of the pre-existing Unraid 'system' share, which by default is of type cache:prefer). This will improve both the speed and reliability of AFP shares, so do it for any other user shares you have that are exported via AFP. The system will automatically create a sub-folder named for the share, so in this example the .AppleDB folder gets moved from the root of the share and placed in /mnt/user/system/AppleDB/TMBackups.

      Now that the user share is set up, go to your Mac. Open a Finder window and in the left-hand pane, under Shared, find Tower-AFP and click it. In the right-hand pane make sure you connect as user 'tm', not as your regular macOS user and not as Guest. You'll need to enter the password associated with tm's account on your server. Check the box to store the credentials in your keychain. The TMBackups share should be displayed in the Finder window; mount it. Now open Time Machine Preferences and click "Add or remove backup disk". Under Available Disks you will see "TMBackups on Tower-AFP.local". Choose it. If Time Machine ever asks you to provide credentials, enter those for the 'tm' user, not your regular macOS user. Enter a name for the disk image (say, "Unraid TM Backups"), do not choose the encryption option, and let Time Machine create the sparse bundle image, mount it and write out the initial full backup, which will take a long time. (A hedged tmutil sketch of this pairing step follows below.)

      Once it has completed, Time Machine should unmount the Unraid TM Backups image, but it will probably leave the TMBackups share mounted, since you mounted it manually in the first place. You can unmount it manually or leave it until you reboot your Mac. From then on, each time Time Machine runs it will automatically mount the user share (keeping it hidden), automatically mount the sparse bundle image, perform the backup and tidy up after itself, then unmount the image and then the share, all without your interaction. The use of a dedicated 'tm' account offers a degree of security, but if you want your backups to be encrypted then use Unraid's encrypted whole-disk file system, not the encryption option offered by Time Machine.
    1 point
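    The Mac-side pairing described above can also be done from Terminal with Apple's tmutil. A hedged sketch, assuming the server is reachable as Tower-AFP.local and the 'tm' user from the write-up (replace PASSWORD accordingly):
      # point Time Machine at the AFP share
      sudo tmutil setdestination "afp://tm:PASSWORD@Tower-AFP.local/TMBackups"
      # confirm the destination was registered
      tmutil destinationinfo
      # kick off the initial backup manually
      tmutil startbackup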
  11. I am building my server now. I appreciate these videos, which will make the deployment of my hardware much easier. Thank you.
    1 point