Friday, 20 April 2018

How to Manage Linux Containers using LXC


Linux Containers (LXC) is a lightweight, operating-system-level virtualization technology capable of running more than one Linux system on a single Linux host. It is an alternative to traditional hypervisors such as KVM and Xen. Compared to full machine virtualization, containers offer weaker isolation, but they also carry less overhead, because a portion of the host kernel and operating system instance is shared by the guests. This does not mean that containers can replace traditional hypervisors; each approach has its own pros and cons. In this article, we will briefly cover the installation and usage of LXC, one of the popular Linux container projects.

Installation of LXC

I'm using Ubuntu 14.10 for all the examples used here.

LXC can be installed with a simple apt-get command on Debian-based distros (yum on Red Hat-based ones). Make sure to prefix commands with 'sudo' everywhere if you are not logged in as root.

sudo apt-get install lxc

Creation, listing, login and stopping

Next you need to create a container by using the lxc-create command

sudo lxc-create -t <template> -n <container-name>
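If you are wondering which values are valid for -t: the templates are shell scripts shipped with the lxc package, and on Ubuntu they typically live under /usr/share/lxc/templates (that path is an assumption and may differ by distro or version). A minimal sketch to list them:

```shell
# List the ready-made LXC templates; files named lxc-<name> here
# correspond to values usable with 'lxc-create -t <name>'.
TEMPLATE_DIR=/usr/share/lxc/templates
if [ -d "$TEMPLATE_DIR" ]; then
    ls "$TEMPLATE_DIR"
else
    echo "No templates found at $TEMPLATE_DIR (is the lxc package installed?)"
fi
```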

There are quite a few ready-made templates available for creating containers, mostly for the popular Linux distributions. Let us try to create an Ubuntu-based container.

sudo lxc-create -t ubuntu -n Ubuntu1

Maybe you want to help yourself to a cup of coffee, as retrieving the required packages and creating the container takes a while. :)

[Image: Output of 'lxc-create' command]

Now we have an Ubuntu container with the name Ubuntu1.

Let us now list all the containers that are present on our host.

sudo lxc-ls

poornima@poornima-Lenovo:~$ sudo lxc-ls
[sudo] password for poornima:
Ubuntu1

We can view the complete details using lxc-info

sudo lxc-info -n <container-name>

poornima@poornima-Lenovo:~$ sudo lxc-info -n Ubuntu1
Name: Ubuntu1
State: STOPPED

From these outputs you can see the containers present on the host and the state each one is in (running, stopped or frozen).
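Rather than querying containers one at a time with lxc-info, LXC 1.x can show the state of everything at once; this is a sketch assuming the --fancy flag is available in your version of lxc-ls:

```shell
# Print a table of all containers with their state and IP addresses.
# --fancy is the LXC 1.x spelling; newer releases abbreviate it to -f.
LXC_LS_FLAGS="--fancy"
if command -v lxc-ls >/dev/null 2>&1; then
    sudo lxc-ls $LXC_LS_FLAGS
else
    echo "lxc-ls not found; install the lxc package first"
fi
```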

Containers can be started using the lxc-start command.

lxc-start -n <container-name>

OR

lxc-start -d -n <container-name>   (the -d option starts the container in the background)

Verify if the container has actually started or not:

poornima@poornima-Lenovo:~$ sudo lxc-info -n Ubuntu1
Name: Ubuntu1
State: RUNNING
PID: 2969
IP: 10.0.3.150
CPU use: 1.27 seconds
BlkIO use: 20.66 MiB
Memory use: 26.27 MiB
KMem use: 0 bytes
Link: vethVFLSOP
TX bytes: 1.80 KiB
RX bytes: 4.94 KiB
Total bytes: 6.74 KiB

In order to log in or attach to the container's console, we have lxc-console.

lxc-console -n <container-name>

poornima@poornima-Lenovo:~$ sudo lxc-console -n Ubuntu1

Connected to tty 1
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself

Ubuntu 14.10 Ubuntu1 tty1

Ubuntu1 login: ubuntu
Password:
Last login: Thu Aug 27 12:05:59 IST 2015 on lxc/tty1
Welcome to Ubuntu 14.10 (GNU/Linux 3.16.0-23-generic i686)

* Documentation: https://ift.tt/ABdZxn
ubuntu@Ubuntu1:~$

We can come back to the host's console using the key sequence 'Ctrl+a' followed by 'q'. Note that the container is still running in the background; we have just detached from it.

If you have to stop the container you need to use lxc-stop.

lxc-stop -n <container-name>

poornima@poornima-Lenovo:~$ sudo lxc-stop -n Ubuntu1
poornima@poornima-Lenovo:~$ sudo lxc-info -n Ubuntu1
Name: Ubuntu1
State: STOPPED

Freezing, unfreezing, cloning and poweroff

Containers can be frozen using the lxc-freeze command.

lxc-freeze -n <container-name>

poornima@poornima-Lenovo:~$ sudo lxc-freeze -n Ubuntu1
poornima@poornima-Lenovo:~$ sudo lxc-info -n Ubuntu1
Name: Ubuntu1
State: FROZEN
PID: 2969
IP: 10.0.3.150
CPU use: 1.48 seconds
BlkIO use: 21.42 MiB
Memory use: 26.96 MiB
KMem use: 0 bytes
Link: vethVFLSOP
TX bytes: 2.63 KiB
RX bytes: 5.80 KiB
Total bytes: 8.43 KiB

You can unfreeze them with lxc-unfreeze. 

 lxc-unfreeze -n <container-name>

One can even clone containers using the lxc-clone command. Before issuing the clone command, make sure you stop the running container first using the lxc-stop command, as described earlier.

lxc-clone -o <existing container> -n <new container>

poornima@poornima-Lenovo:~$ sudo lxc-clone -o Ubuntu1 -n Ubuntu-clone
Created container Ubuntu-clone as copy of Ubuntu1
poornima@poornima-Lenovo:~$ sudo lxc-ls
Ubuntu-clone Ubuntu1

To power off a container, run poweroff from inside the container's console.

ubuntu@Ubuntu1:~$ sudo poweroff
[sudo] password for ubuntu:

Broadcast message from ubuntu@Ubuntu1
(/dev/lxc/tty1) at 12:17 ...

The system is going down for power off NOW!

You can verify from the host that the container has stopped.

poornima@poornima-Lenovo:~$ sudo lxc-info -n Ubuntu1
Name: Ubuntu1
State: STOPPED

Snapshots - creation and restoration

The lxc-snapshot command takes a snapshot of the specified container.

lxc-snapshot -n <container-name>

poornima@poornima-Lenovo:~$ sudo lxc-snapshot -n Ubuntu1
lxc_container: lxccontainer.c: lxcapi_snapshot: 2953 Snapshot of directory-backed container requested.
lxc_container: lxccontainer.c: lxcapi_snapshot: 2954 Making a copy-clone. If you do want snapshots, then
lxc_container: lxccontainer.c: lxcapi_snapshot: 2955 please create an aufs or overlayfs clone first, snapshot that
lxc_container: lxccontainer.c: lxcapi_snapshot: 2956 and keep the original container pristine.

These snapshots are stored under /var/lib/lxc in Ubuntu 14.10. In some earlier versions, you can find them in /var/lib/lxcsnaps.

poornima@poornima-Lenovo:~$ sudo lxc-snapshot --name Ubuntu1 --list
snap0 (/var/lib/lxc/Ubuntu1/snaps) 2015:08:27 12:20:41
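A snapshot can be rolled back with the -r option of lxc-snapshot; the container must be stopped first. A minimal sketch, reusing the article's container and snapshot names:

```shell
# Restore snapshot snap0 into container Ubuntu1. A restore requires
# the container to be stopped, so stop it first (ignoring the error
# if it is already stopped).
CONTAINER=Ubuntu1
SNAPSHOT=snap0
if command -v lxc-snapshot >/dev/null 2>&1; then
    sudo lxc-stop -n "$CONTAINER" 2>/dev/null
    sudo lxc-snapshot -n "$CONTAINER" -r "$SNAPSHOT"
fi
```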

Configuration options

By default, all the containers created using LXC are stored under /var/lib/lxc, where each container has its own directory. Inside this directory, the container's configuration is stored in a file called config. The option lxc.rootfs specifies the location of the container's root file system, and lxc.network.type specifies the kind of networking used by that container (e.g. veth).
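For illustration, the config file of the Ubuntu1 container created earlier might contain entries like the following; the values shown are typical Ubuntu defaults, given as an example rather than your exact file:

```
# /var/lib/lxc/Ubuntu1/config (illustrative excerpt)
lxc.rootfs = /var/lib/lxc/Ubuntu1/rootfs
lxc.utsname = Ubuntu1
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
```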

If you are interested in more configuration options, check out man 5 lxc.conf

Deletion

Containers can be completely destroyed from the host using the lxc-destroy command. If you have created any snapshots of the container that you are about to delete, you need to delete them first.

lxc-destroy -n <container-name>

poornima@poornima-Lenovo:~$ sudo lxc-destroy --name=Ubuntu-clone
poornima@poornima-Lenovo:~$ sudo lxc-info --name=Ubuntu-clone
Ubuntu-clone doesn't exist

Management using web console

If you are not a fan of the Linux command line, or you are just not comfortable using it, you can manage your containers through your browser using LXC Web Panel.

Install the web panel using the following command as the root user.

wget https://ift.tt/RDRQgp -O - | bash

root@poornima-Lenovo:/home/poornima# wget https://ift.tt/RDRQgp -O - | bash
--2015-08-27 13:15:13-- https://ift.tt/RDRQgp
Resolving lxc-webpanel.github.io (lxc-webpanel.github.io)... 103.245.222.133
Connecting to lxc-webpanel.github.io (lxc-webpanel.github.io)|103.245.222.133|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2678 (2.6K) [application/x-sh] Saving to: STDOUT

[wget progress bar and LXC Web Panel ASCII art banner]
Automatic installer

Installing requirement...
100%[======================================>] 2,678 --.-K/s in 0.003s

2015-08-27 13:15:14 (867 KB/s) - written to stdout [2678/2678]

Cloning LXC Web Panel...
Cloning into '/srv/lwp'...
remote: Counting objects: 188, done.
remote: Total 188 (delta 0), reused 0 (delta 0), pack-reused 188
Receiving objects: 100% (188/188), 172.76 KiB | 49.00 KiB/s, done.
Resolving deltas: 100% (79/79), done.
Checking connectivity... done.

Installation complete!
Adding /etc/init.d/lwp...
Done
Starting server...done.
Connect you on http://your-ip-address:5000/

We can then access the user interface at http://<your-ip-address>:5000 using the default username/password admin/admin.

[Image: LXC Web Panel login screen]

[Image: LXC Web Panel user interface]

Phew! Now, you are ready to perform all your container related operations using the web panel!!

Conclusion

In this article, we have learnt how to install LXC, use some of the available commands, and manage containers through the web panel. Of late, tools have been developed that use LXC underneath; the Docker engine and LXD are two of them. In the near future, it is quite possible that cloud infrastructure will make heavy use of Linux containers for all the advantages they offer.


IP/DNS Detect


What is a "WebRTC leak"?

WebRTC implements STUN (Session Traversal Utilities for NAT), a protocol that allows the discovery of your public IP address. To disable it:

  • Mozilla Firefox: Type "about:config" in the address bar. Scroll down to "media.peerconnection.enabled" and double-click to set it to false.
  • Google Chrome: Install Google's official extension WebRTC Network Limiter.
  • Opera: Type "about:config" in the address bar or go to "Settings". Select "Show advanced settings", click on "Privacy & security", and under "WebRTC" select "Disable non-proxied UDP".

What is a "DNS leak"?

In this context, a "DNS leak" means an unencrypted DNS query sent by your system OUTSIDE the established VPN tunnel.

Why does my system suffer DNS leaks?

In brief: Windows lacks the concept of a global DNS. Each network interface can have its own DNS servers. Under various circumstances, the system process svchost.exe will send out DNS queries without respecting the routing table and the default gateway of the VPN tunnel, causing the leak.

Should I be worried about a DNS leak?

If you don't want your ISP, or anybody with the ability to monitor your line, to know the names your system tries to resolve (and therefore the web sites you visit, etc.), you must prevent your system from leaking DNS. If you feel that you are living in a country hostile to human rights, or that the above knowledge could harm you in any way, you should act immediately to stop DNS leaks.

How Does Torrent Detection Work?

To detect data from your torrent client, we provide a magnet link to a fake file. The magnet contains an HTTP URL of a tracker under our control, which records the information coming from the torrent client.


LXD 2.0: Introduction to LXD [1/12] | Stéphane Graber's website


A few common questions about LXD

What’s LXD?

At its simplest, LXD is a daemon which provides a REST API to drive LXC containers.

Its main goal is to provide a user experience that’s similar to that of virtual machines but using Linux containers rather than hardware virtualization.

How does LXD relate to Docker/Rkt?

This is by far the question we get the most, so let's address it immediately!

LXD focuses on system containers, also called infrastructure containers. That is, an LXD container runs a full Linux system, exactly as it would when run on bare metal or in a VM.

Those containers will typically be long running and based on a clean distribution image. Traditional configuration management tools and deployment tools can be used with LXD containers exactly as you would use them for a VM, cloud instance or physical machine.

In contrast, Docker focuses on ephemeral, stateless, minimal containers that won’t typically get upgraded or re-configured but instead just be replaced entirely. That makes Docker and similar projects much closer to a software distribution mechanism than a machine management tool.

The two models aren’t mutually exclusive either. You can absolutely use LXD to provide full Linux systems to your users who can then install Docker inside their LXD container to run the software they want.

Why LXD?

We’ve been working on LXC for a number of years now. LXC is great at what it does, that is, it provides a very good set of low-level tools and a library to create and manage containers.

However, those low-level tools aren't necessarily very user friendly. They require a lot of initial knowledge to understand what they do and how they work. Keeping backward compatibility with older containers and deployment methods has also prevented LXC from using some security features by default, leading to more manual configuration for users.

We see LXD as the opportunity to address those shortcomings. On top of being a long running daemon which lets us address a lot of the LXC limitations like dynamic resource restrictions, container migration and efficient live migration, it also gave us the opportunity to come up with a new default experience, that’s safe by default and much more user focused.

The main LXD components

There are a number of main components that make up LXD; these are typically visible in the LXD directory structure, in its command line client and in the API structure itself.

Containers

Containers in LXD are made of:

  • A filesystem (rootfs)
  • A list of configuration options, including resource limits, environment, security options and more
  • A bunch of devices like disks, character/block unix devices and network interfaces
  • A set of profiles the container inherits configuration from (see below)
  • Some properties (container architecture, ephemeral or persistent and the name)
  • Some runtime state (when using CRIU for checkpoint/restore)

Snapshots

Container snapshots are identical to containers except that they are immutable: they can be renamed, destroyed or restored, but cannot be modified in any way.

It is worth noting that because we allow storing the container runtime state, this effectively gives us the concept of “stateful” snapshots, that is, the ability to roll the container back including its CPU and memory state at the time of the snapshot.
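Assuming CRIU is installed on the host, a stateful snapshot and rollback can be sketched like this; the container name "first" is just an example:

```shell
# Take a stateful snapshot (includes CPU/memory state via CRIU),
# then roll the container back to it.
CONTAINER=first
SNAPSHOT=statesnap
if command -v lxc >/dev/null 2>&1; then
    lxc snapshot "$CONTAINER" "$SNAPSHOT" --stateful
    lxc restore "$CONTAINER" "$SNAPSHOT"
fi
```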

Images

LXD is image based; all LXD containers come from an image. Images are typically clean Linux distribution images similar to what you would use for a virtual machine or cloud instance.

It is possible to “publish” a container, making an image from it which can then be used by the local or remote LXD hosts.

Images are uniquely identified by their sha256 hash and can be referenced by using their full or partial hash. Because typing long hashes isn’t particularly user friendly, images can also have any number of properties applied to them, allowing for an easy search through the image store. Aliases can also be set as a one to one mapping between a unique user friendly string and an image hash.
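As a sketch of the alias mechanism, creating and using one might look like the following; the fingerprint and names are placeholders, not a real image:

```shell
# Map a friendly alias onto an image fingerprint, then launch by alias.
ALIAS=my-ubuntu
FINGERPRINT=e9a8bdfab6dc   # placeholder partial hash for illustration
if command -v lxc >/dev/null 2>&1; then
    lxc image alias create "$ALIAS" "$FINGERPRINT"
    lxc launch "$ALIAS" demo-container
fi
```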

LXD comes pre-configured with three remote image servers (see remotes below):

  • “ubuntu:” provides stable Ubuntu images
  • “ubuntu-daily:” provides daily builds of Ubuntu
  • “images:” is a community-run image server providing images for a number of other Linux distributions using the upstream LXC templates

Remote images are automatically cached by the LXD daemon and kept for a number of days (10 by default) after their last use before being expired.

Additionally LXD also automatically updates remote images (unless told otherwise) so that the freshest version of the image is always available locally.

Profiles

Profiles are a way to define container configuration and container devices in one place and then have it apply to any number of containers.

A container can have multiple profiles applied to it. When building the final container configuration (known as expanded configuration), the profiles will be applied in the order they were defined in, overriding each other when the same configuration key or device is found. Then the local container configuration is applied on top of that, overriding anything that came from a profile.

LXD ships with two pre-configured profiles:

  • “default” is automatically applied to all containers unless an alternative list of profiles is provided by the user. This profile currently does just one thing: define an “eth0” network device for the container.
  • “docker” is a profile you can apply to a container in which you want to run Docker containers. It requests that LXD load some required kernel modules, turns on container nesting and sets up a few device entries.
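For example, launching a container meant to run Docker inside would combine both profiles; later profiles override earlier ones where keys collide (the container name here is illustrative):

```shell
# "default" supplies the eth0 device; "docker" enables nesting and
# loads the modules Docker needs.
BASE_PROFILE=default
EXTRA_PROFILE=docker
if command -v lxc >/dev/null 2>&1; then
    lxc launch ubuntu:16.04 docker-host -p "$BASE_PROFILE" -p "$EXTRA_PROFILE"
fi
```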

Remotes

As I mentioned earlier, LXD is a networked daemon. The command line client that comes with it can therefore talk to multiple remote LXD servers as well as image servers.

By default, our command line client comes with the following remotes defined:

  • local: (default remote, talks to the local LXD daemon over a unix socket)
  • ubuntu: (Ubuntu image server providing stable builds)
  • ubuntu-daily: (Ubuntu image server providing daily builds)
  • images: (images.linuxcontainers.org image server)

Any combination of those remotes can be used with the command line client.

You can also add any number of remote LXD hosts that were configured to listen to the network. Either anonymously if they are a public image server or after going through authentication when managing remote containers.

It’s that remote mechanism that makes it possible to interact with remote image servers as well as copy or move containers between hosts.
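For instance, once a remote named host-a has been added, copying or moving a container between machines is a one-liner each; the remote and container names below are examples:

```shell
# Copy keeps the source container; move deletes it after the transfer.
SRC=local:first
DST=host-a:first-copy
if command -v lxc >/dev/null 2>&1; then
    lxc copy "$SRC" "$DST"
    # lxc move "$SRC" "$DST"   # alternative: relocate instead of copy
fi
```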

Security

One aspect that was core to our design of LXD was to make it as safe as possible while allowing modern Linux distributions to run inside it unmodified.

The main security features used by LXD through its use of the LXC library are:

  • Kernel namespaces. Especially the user namespace as a way to keep everything the container does separate from the rest of the system. LXD uses the user namespace by default (contrary to LXC) and allows for the user to turn it off on a per-container basis (marking the container “privileged”) when absolutely needed.
  • Seccomp. To filter some potentially dangerous system calls.
  • AppArmor: To provide additional restrictions on mounts, socket, ptrace and file access. Specifically restricting cross-container communication.
  • Capabilities. To prevent the container from loading kernel modules, altering the host system time, …
  • CGroups. To restrict resource usage and prevent DoS attacks against the host.

Rather than exposing those features directly to the user as LXC would, we’ve built a new configuration language which abstracts most of those into something that’s more user friendly. For example, one can tell LXD to pass any host device into the container without having to also lookup its major/minor numbers to manually update the cgroup policy.

Communications with LXD itself are secured using TLS 1.2 with a very limited set of allowed ciphers. When dealing with hosts outside of the system certificate authority, LXD will prompt the user to validate the remote fingerprint (SSH style), then cache the certificate for future use.

The REST API

Everything that LXD does is done over its REST API. There is no other communication channel between the client and the daemon.

The REST API can be accessed over a local unix socket, requiring only group membership for authentication, or over an HTTPS socket using a client certificate for authentication.

The structure of the REST API matches the different components described above and is meant to be very simple and intuitive to use.

When a more complex communication mechanism is required, LXD will negotiate websockets and use those for the rest of the communication. This is used for interactive console sessions, container migration and event notification.

With LXD 2.0 comes the /1.0 stable API. We will not break backward compatibility within the /1.0 API endpoint; however, we may add extra features to it, which we'll signal by declaring additional API extensions that the client can look for.

Containers at scale

While LXD provides a good command line client, that client isn't meant to manage thousands of containers on multiple hosts. For that kind of use case, we have nova-lxd, an OpenStack plugin that makes OpenStack treat LXD containers in exactly the same way it would treat VMs.

This allows for very large deployments of LXD on a large number of hosts, using the OpenStack APIs to manage network, storage and load-balancing.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net

And if you can’t wait until the next few posts to try LXD, you can take our guided tour online and try it for free right from your web browser!


Linux Containers - LXD - Getting started - command line


Installation

Choose your release

LXD upstream maintains two release branches in parallel:

  • LTS release (LXD 2.0.x)
  • Feature releases (LXD 2.x)

LTS releases are recommended for production environments as they will benefit from regular bugfix and security updates but will not see new features added or any kind of behavioral change.

To get all the latest features and monthly updates to LXD, use the feature release branch instead.

Getting the packages

Alpine Linux

To install the feature branch of LXD, run:

apk add lxd

ArchLinux

Instructions on how to use the AUR package for LXD can be found here.

Alternatively, the snap package can also be used on ArchLinux (see below).

Note that in both cases, you will need to build and install the linux-userns kernel.

Fedora

Instructions on how to use the COPR repository for LXD can be found here.

Alternatively, the snap package can also be used on Fedora (see below).

Gentoo

To install the feature branch of LXD, run:

emerge --ask lxd

Ubuntu 14.04 LTS

To install the LTS branch of LXD, run:

apt install -t trusty-backports lxd lxd-client

Ubuntu 16.04 LTS

To install the LTS branch of LXD, run:

apt install lxd lxd-client

To install the feature branch of LXD, run:

apt install -t xenial-backports lxd lxd-client

Snap package (ArchLinux, Debian, Fedora, OpenSUSE and Ubuntu)

LXD upstream publishes and tests a snap package which works for a number of Linux distributions.

The list of Linux distributions we currently test our snap for can be found here.

For those distributions, you should first install snapd using those instructions.

After that, you can install LXD with:

snap install lxd

MacOS builds

LXD upstream publishes builds of the LXD client for macOS through Homebrew.

To install the feature branch of LXD, run:

brew install lxc

Windows builds

Native builds of the LXD client for Windows can be found here.

Installing from source

Instructions on building and installing LXD from source can be found here.

Initial configuration

Before you can create containers, you need to tell LXD a little bit about your storage and network needs.

This is all done with:

sudo lxd init

Access control

Access control for LXD is based on group membership. The root user as well as members of the "lxd" group can interact with the local daemon.

If the "lxd" group is missing on your system, create it, then restart the LXD daemon. You can then add trusted users to it. Anyone added to this group will have full control over LXD.

Because group membership is normally only applied at login, you may need to either re-open your user session or use the "newgrp lxd" command in the shell you're going to use to talk to LXD.
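The group setup described above can be sketched as follows; this assumes sudo is available and that restarting the daemon goes through systemctl (with a service fallback for older init systems):

```shell
# Create the lxd group if it does not exist yet, then add the current
# user to it. Log out and back in, or run "newgrp lxd", for the new
# membership to take effect in your shell.
GROUP=lxd
if command -v lxd >/dev/null 2>&1 && command -v sudo >/dev/null 2>&1; then
    getent group "$GROUP" >/dev/null || sudo groupadd "$GROUP"
    sudo usermod -aG "$GROUP" "$USER"
    sudo systemctl restart lxd 2>/dev/null || sudo service lxd restart
fi
```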

Creating and using your first container

Creating your first container is as simple as:

lxc launch ubuntu:16.04 first

That will create and start a new Ubuntu 16.04 container as can be confirmed with:

lxc list

Your container here is called "first". You also could let LXD give it a random name by just calling "lxc launch ubuntu:16.04" without a name.

Now that your container is running, you can get a shell inside it with:

lxc exec first -- /bin/bash

Or just run a command directly:

lxc exec first -- apt-get update

To pull a file from the container, use:

lxc file pull first/etc/hosts .

To push one, use:

lxc file push hosts first/tmp/

To stop the container, simply do:

lxc stop first

And to remove it entirely:

lxc delete first

Container images

LXD is image based. Containers must be created from an image and so the image store must get some images before you can do much with LXD.

There are three ways to feed that image store:

  1. Use one of the built-in image remotes
  2. Use a remote LXD as an image server
  3. Manually import an image tarball

Using the built-in image remotes

LXD comes with 3 default remotes providing images:

  1. ubuntu: (for stable Ubuntu images)
  2. ubuntu-daily: (for daily Ubuntu images)
  3. images: (for a bunch of other distros)

To start a container from them, simply do:

lxc launch ubuntu:14.04 my-ubuntu
lxc launch ubuntu-daily:16.04 my-ubuntu-dev
lxc launch images:centos/6/amd64 my-centos

Using a remote LXD as an image server

Using a remote image server is as simple as adding it as a remote and just using it:

lxc remote add my-images 1.2.3.4
lxc launch my-images:image-name your-container

An image list can be obtained with:

lxc image list my-images:

Manually importing an image

If you already have an LXD-compatible image file, you can import it with:

lxc image import <file> --alias my-alias

And then start a container using:

lxc launch my-alias my-container

See the image specification for more details.

Multiple hosts

The "lxc" command line tool can talk to multiple LXD servers and defaults to talking to the local one.

Remote operations require that the following two commands have been run on the remote server:

lxc config set core.https_address "[::]"
lxc config set core.trust_password some-password

The former tells LXD to bind all addresses on port 8443. The latter sets a trust password to be used by new clients.

Now to talk to that remote LXD, you can simply add it with:

lxc remote add host-a <ip address or DNS>

This will prompt you to confirm the remote server fingerprint and then ask you for the password.

And after that, use the same commands as above, but prefix container and image names with the remote host, like:

lxc exec host-a:first -- apt-get update

Installing DokuWiki, nginx, PHP on Ubuntu 14.04 64-bit

A wiki allows you to organize and present documentation without the overhead of a complex system.

DokuWiki is free, open source, and requires very little from a server. Using a very familiar interface, it allows you to easily scale and optimize using many advanced features. Utilizing files instead of a database, DokuWiki is extremely flexible about the type of system it will run on (no database server required).


For these instructions, we will be installing and configuring nginx and PHP. Additionally, we will download and configure DokuWiki.

The instructions were verified on InterServer's OpenVZ VPS Hosting service, using the Ubuntu 14.04 64-bit minimal version.

Requirements:

  • Tested on InterServer’s OpenVZ VPS Hosting with Ubuntu 14.04 64-bit (instructions are for minimal distribution, but should work for regular distribution as well).
  • Putty or similar SSH client
  • root login and password or an account capable of sudo
  • nano or similar text editor installed on the server.
  • Make sure Apache is not installed (they can co-exist, with Apache behind nginx; however, that is out of scope for these instructions).

Installation

First, we will run an update:

  • sudo apt-get update

Install nginx:

  • sudo apt-get install nginx

Verify that the server installed correctly

Using your web browser, go to the URL or IP address of your server. If your site does not come up by name, this could indicate a DNS issue, so try the IP address that was listed in the welcome email from your host instead.

You should see this screen:

Install PHP

We will install php5-fpm (FastCGI Process Manager) and php5-gd:

  • sudo apt-get install php5-fpm php5-gd

We need to configure a few things for everything to work correctly:

First, we will configure FPM:

  • sudo nano /etc/php5/fpm/php.ini

There is a potential security problem with one of the settings: if a script is not found, a similarly named script may be executed. This requirement is also documented in /etc/nginx/sites-available/default, which we will work on in the next step. Let's make the change:

We are trying to find a line that says: ;cgi.fix_pathinfo=1. We can find it by hitting Ctrl-W and searching for cgi.fix.

We will remove the semicolon (if present), and change it to:

  • cgi.fix_pathinfo=0
  • ctrl-x and y will save the file.

Restart PHP FPM:

  • sudo service php5-fpm restart
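As an alternative to editing php.ini by hand, the same change and restart can be done non-interactively with sed; this is a sketch against the php.ini path used above:

```shell
# Uncomment cgi.fix_pathinfo (drop a leading ';' if present) and force
# its value to 0, then restart PHP-FPM so the change takes effect.
PHP_INI=/etc/php5/fpm/php.ini
if [ -f "$PHP_INI" ]; then
    sudo sed -i 's/^;\{0,1\}cgi\.fix_pathinfo=.*/cgi.fix_pathinfo=0/' "$PHP_INI"
    sudo service php5-fpm restart
fi
```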

At the moment, we have nginx installed and php-fpm configured. We just need to finish configuring nginx so everything is connected.

Let's open the sites-available/default file (if you already have nginx installed from before, the file may be called yourdomain.com):

sudo nano /etc/nginx/sites-available/default

Make the following changes:

Find the line that says: index index.html index.htm; and change it to:

  • index index.php index.html index.htm;

Find the line that says: server_name localhost; and change it to:

  • server_name yourdomain.com_or_IP;

There is also some functionality that is currently disabled with a # that we want to enable. Let's enable it by erasing the #, so the blocks read:

#error_page 404 /404.html;

#error_page 500 502 503 504 /50x.html;

#location = /50x.html {

# root /usr/share/nginx/html;

#}

A few more:

#location ~ \.php$ {
#    fastcgi_split_path_info ^(.+\.php)(/.+)$;
#    fastcgi_pass unix:/var/run/php5-fpm.sock;
#    fastcgi_index index.php;
#    include fastcgi_params;
#}

Ctrl-X, then Y, will save the file.
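For reference, here is how that whole section should read once the comment markers are removed (note the directive name fastcgi_split_path_info, as in the stock nginx fastcgi setup; compare against the default file shipped with your nginx version):

```nginx
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
    root /usr/share/nginx/html;
}

location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
}
```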

Restart nginx, so the changes take effect:

sudo service nginx restart

Let’s test to make sure everything is working. We will create a PHP file with a single command to show us that PHP and nginx are connected:

  • sudo nano /usr/share/nginx/html/test.php

We will use phpinfo() to show our configuration. Write the following line:

  • <?php phpinfo(); ?>
  • Close out and save with Ctrl-X, then Y.

Use your browser to view test.php (yourdomain.com/test.php, or yourip/test.php)

  • If you see the phpinfo() configuration page, your server is ready.

Let’s clean up the file we created for testing:

  • sudo rm /usr/share/nginx/html/test.php

Install DokuWiki

First, we will go to our home directory for the download:

  • cd ~

Download DokuWiki:

  • sudo wget http://download.dokuwiki.org/src/dokuwiki/dokuwiki-stable.tgz

Unpack it:

  • sudo tar xzvf dokuwiki-stable.tgz

Clean up:

  • sudo rm dokuwiki-stable.tgz

Move the directory so nginx can see it:

  • sudo mv dok* /usr/share/nginx/html/wiki
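The glob in 'mv dok*' works because the tarball extracts to a single versioned directory, which the move then renames to 'wiki'. A local rehearsal of the unpack-and-move steps (the directory name dokuwiki-2014-05-05 is a made-up stand-in for whatever the real tarball extracts to, and html/ stands in for /usr/share/nginx/html):

```shell
# Build a scratch tarball that mimics dokuwiki-stable.tgz:
work=$(mktemp -d) && cd "$work"
mkdir dokuwiki-2014-05-05
echo '<?php // stub' > dokuwiki-2014-05-05/doku.php
tar czf dokuwiki-stable.tgz dokuwiki-2014-05-05
rm -r dokuwiki-2014-05-05

# The steps from the article:
tar xzvf dokuwiki-stable.tgz   # unpack
rm dokuwiki-stable.tgz         # clean up
mkdir html
mv dok* html/wiki              # glob matches the versioned directory
ls html/wiki
```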

Let's set owners so nginx can write into our wiki:

  • sudo chown -R www-data /usr/share/nginx/html/wiki/data
  • sudo chown www-data /usr/share/nginx/html/wiki/lib/plugins/
  • sudo chown www-data /usr/share/nginx/html/wiki/conf

Everything is installed and configured on the command-line side. Let's go to our browser and finish the configuration:

  • Visit yourdomain.com/wiki/install.php
  • Configure the Wiki, with your login, password, etc.
  • Select the type of public powers you want to grant.
  • We are done configuring. Follow the links on the page to learn how to customize DokuWiki.

Let's clean up:

  • sudo rm /usr/share/nginx/html/wiki/install.php



How to install DokuWiki on Debian Wheezy with Nginx


DokuWiki is simple-to-use open-source wiki software that doesn’t require a database; it is mainly aimed at creating documentation of any kind.
To install DokuWiki on a virtual server running Debian Wheezy, follow the easy steps described below. The installation instructions should apply to any Debian-based server with Nginx and PHP-FPM installed on it.

Make sure your Debian VPS is up-to-date:

apt-get update
apt-get upgrade

‘apt-get update’ will refresh your package list so it’s all up to date, then the upgrade will upgrade any packages that have newer versions.

Install Nginx and PHP-FPM using the following command:

apt-get install nginx php5-fpm php5-cli php5-mcrypt php5-gd

Download and unpack the latest stable version of DokuWiki from download.dokuwiki.org:

cd /root
wget http://download.dokuwiki.org/src/dokuwiki/dokuwiki-stable.tgz -O dokuwiki.tgz
tar -xvf dokuwiki.tgz

Create a new Nginx server block. For example, create a new Nginx configuration file in the ‘/etc/nginx/sites-available’ directory:

vi /etc/nginx/sites-available/yourdomain.com

and add the following content:

server {
    server_name yourdomain.com;
    listen 80;
    root /var/www/yourdomain.com/;
    access_log /var/log/nginx/yourdomain.com-access.log;
    error_log /var/log/nginx/yourdomain.com-error.log;

    index index.php index.html doku.php;

    location ~ /(data|conf|bin|inc)/ {
        deny all;
    }
    location ~ /\.ht {
        deny all;
    }
    location ~ \.php {
        fastcgi_index index.php;
        fastcgi_split_path_info ^(.+\.php)(.*)$;
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

Create a symbolic link using the following command:

ln -s /etc/nginx/sites-available/yourdomain.com /etc/nginx/sites-enabled/yourdomain.com

Restart the Nginx web server for the changes to take effect:

/etc/init.d/nginx restart

Move the DokuWiki installation files to the document root directory defined in the Nginx server block above:

mv /root/dokuwiki-* /var/www/yourdomain.com

The webserver user (www-data) needs to be able to write to ‘data’ , ‘conf’ and ‘lib/plugins/’ directories, so you can easily accomplish that by executing the following command:

chown -R www-data:www-data /var/www/yourdomain.com/

Open http://yourdomain.com/install.php in a web browser. Enter the following information: your site name, username, password and email address for the admin user, then click ‘Save’.

Once the installation is complete, our recommendation is to install the ‘captcha’ and ‘preregister’ plug-ins in order to protect the registration page against spam bots, which create a huge number of useless fake users.


Delete the install script:

rm /var/www/yourdomain.com/install.php

That is it. The DokuWiki installation is now complete.



How To Set Up Nginx Server Blocks (Virtual Hosts) on Ubuntu 14.04 LTS | DigitalOcean



Introduction

When using the Nginx web server, server blocks (similar to the virtual hosts in Apache) can be used to encapsulate configuration details and host more than one domain off of a single server.

In this guide, we'll discuss how to configure server blocks in Nginx on an Ubuntu 14.04 server.

Prerequisites

We're going to be using a non-root user with sudo privileges throughout this tutorial. If you do not have a user like this configured, you can make one by following steps 1-4 in our Ubuntu 14.04 initial server setup guide.

You will also need to have Nginx installed on your server. If you want an entire LEMP (Linux, Nginx, MySQL, and PHP) stack on your server, you can follow our guide on setting up a LEMP stack in Ubuntu 14.04. If you only need Nginx, you can install it by typing:

sudo apt-get update
sudo apt-get install nginx

When you have fulfilled these requirements, you can continue on with this guide.

For demonstration purposes, we're going to set up two domains with our Nginx server. The domain names we'll use in this guide are example.com and test.com.

You can find a guide on how to set up domain names with DigitalOcean here. If you do not have two spare domain names to play with, use dummy names for now and we'll show you later how to configure your local computer to test your configuration.

Step One — Set Up New Document Root Directories

Nginx on Ubuntu 14.04 has one server block enabled by default. It is configured to serve documents out of a directory at:

/usr/share/nginx/html

We won't use the default since it is easier to work with things in the /var/www directory. Ubuntu's Nginx package does not use /var/www as its document root by default due to a Debian policy about packages utilizing /var/www.

Since we are users and not package maintainers, we can tell Nginx that this is where we want our document roots to be. Specifically, we want a directory for each of our sites within the /var/www directory and we will have a directory under these called html to hold our actual files.

First, we need to create the necessary directories. We can do this with the following command. The -p flag tells mkdir to create any necessary parent directories along the way:

sudo mkdir -p /var/www/example.com/html
sudo mkdir -p /var/www/test.com/html

Now that you have your directories created, we need to transfer ownership to our regular user. We can use the $USER environmental variable to substitute the user account that we are currently signed in on. This will allow us to create files in this directory without allowing our visitors to create content.

sudo chown -R $USER:$USER /var/www/example.com/html
sudo chown -R $USER:$USER /var/www/test.com/html

The permissions of our web roots should be correct already if you have not modified your umask value, but we can make sure by typing:

sudo chmod -R 755 /var/www

Our directory structure is now configured and we can move on.

Step Two — Create Sample Pages for Each Site

Now that we have our directory structure set up, let's create a default page for each of our sites so that we will have something to display.

Create an index.html file in your first domain:

nano /var/www/example.com/html/index.html

Inside the file, we'll create a really basic file that indicates what site we are currently accessing. It will look like this:

<html>
    <head>
        <title>Welcome to Example.com!</title>
    </head>
    <body>
        <h1>Success!  The example.com server block is working!</h1>
    </body>
</html>

Save and close the file when you are finished.

Since the file for our second site is basically going to be the same, we can copy it over to our second document root like this:

cp /var/www/example.com/html/index.html /var/www/test.com/html/

Now, we can open the new file in our editor and modify it so that it refers to our second domain:

nano /var/www/test.com/html/index.html
<html>
    <head>
        <title>Welcome to Test.com!</title>
    </head>
    <body>
        <h1>Success!  The test.com server block is working!</h1>
    </body>
</html>

Save and close this file when you are finished. You now have some pages to display to visitors of our two domains.
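Since the two pages differ only in the domain name, the whole of this step can also be sketched as one loop, shown here against a scratch directory standing in for /var/www:

```shell
base=$(mktemp -d)   # stand-in for /var/www on the real server
for site in example.com test.com; do
    mkdir -p "$base/$site/html"
    cat > "$base/$site/html/index.html" <<PAGE
<html>
    <head>
        <title>Welcome to $site!</title>
    </head>
    <body>
        <h1>Success!  The $site server block is working!</h1>
    </body>
</html>
PAGE
done
grep '<title>' "$base/test.com/html/index.html"
```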

Step Three — Create Server Block Files for Each Domain

Now that we have the content we wish to serve, we need to actually create the server blocks that will tell Nginx how to do this.

By default, Nginx contains one server block called default which we can use as a template for our own configurations. We will begin by designing our first domain's server block, which we will then copy over for our second domain and make the necessary modifications.

Create the First Server Block File

As mentioned above, we will create our first server block config file by copying over the default file:

sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/example.com

Now, open the new file you created in your text editor with root privileges:

sudo nano /etc/nginx/sites-available/example.com

Ignoring the commented lines, the file will look similar to this:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.html index.htm;

    server_name localhost;

    location / {
        try_files $uri $uri/ =404;
    }
}

First, we need to look at the listen directives. Only one of our server blocks can have the default_server specification. This specifies which block should serve a request if the requested server_name does not match any of the available server blocks.

We are eventually going to disable the default server block configuration, so we can place the default_server option in either this server block or in the one for our other site. I'm going to leave the default_server option enabled in this server block, but you can choose whichever is best for your situation.

The next thing we're going to have to adjust is the document root, specified by the root directive. Point it to the site's document root that you created:

root /var/www/example.com/html;

Note: Each Nginx statement must end with a semi-colon (;), so check each of your lines if you are running into problems.

Next, we want to modify the server_name to match requests for our first domain. We can additionally add any aliases that we want to match. We will add a www.example.com alias to demonstrate:

server_name example.com www.example.com;

When you are finished, your file will look something like this:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /var/www/example.com/html;
    index index.html index.htm;

    server_name example.com www.example.com;

    location / {
        try_files $uri $uri/ =404;
    }
}

That is all we need for a basic configuration. Save and close the file to exit.

Create the Second Server Block File

Now that we have our initial server block configuration, we can use that as a basis for our second file. Copy it over to create a new file:

sudo cp /etc/nginx/sites-available/example.com /etc/nginx/sites-available/test.com

Open the new file with root privileges in your editor:

sudo nano /etc/nginx/sites-available/test.com

In this new file, we're going to have to look at the listen directives again. If you left the default_server option enabled in the last file, you'll have to remove it in this file. Furthermore, you'll have to get rid of the ipv6only=on option, as it can only be specified once per address/port combination:

listen 80;
listen [::]:80;

Adjust the document root directive to point to your second domain's document root:

root /var/www/test.com/html;

Adjust the server_name to match your second domain and any aliases:

server_name test.com www.test.com;

Your file should look something like this with these changes:

server {
    listen 80;
    listen [::]:80;

    root /var/www/test.com/html;
    index index.html index.htm;

    server_name test.com www.test.com;

    location / {
        try_files $uri $uri/ =404;
    }
}

When you are finished, save and close the file.
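The second file differs from the first only in the two default_server flags and the domain name, so the edits can also be scripted with sed. A sketch on local copies (on the server the files live in /etc/nginx/sites-available and would be edited with sudo):

```shell
tmp=$(mktemp -d) && cd "$tmp"
# Local copy of the first server block file:
cat > example.com <<'CONF'
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /var/www/example.com/html;
    index index.html index.htm;

    server_name example.com www.example.com;

    location / {
        try_files $uri $uri/ =404;
    }
}
CONF
# Strip the default_server/ipv6only options and swap the domain:
sed -e 's/ default_server ipv6only=on//' \
    -e 's/ default_server//' \
    -e 's/example\.com/test.com/g' example.com > test.com
cat test.com
```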

Step Four — Enable your Server Blocks and Restart Nginx

Now that your server blocks are created, we need to enable them.

We can do this by creating symbolic links from these files to the sites-enabled directory, which Nginx reads from during startup.

We can create these links by typing:

sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/test.com /etc/nginx/sites-enabled/

These files are now in the enabled directory. However, the default server block file we used as a template is also enabled currently and will conflict with our file that has the default_server parameter set.

We can disable the default server block file by simply removing the symbolic link. It will still be available for reference in the sites-available directory, but it won't be read by Nginx on startup:

sudo rm /etc/nginx/sites-enabled/default
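Enabling and disabling sites is just symlink management, which can be sketched locally with scratch directories standing in for /etc/nginx:

```shell
root=$(mktemp -d)   # stand-in for /etc/nginx
mkdir "$root/sites-available" "$root/sites-enabled"
touch "$root/sites-available/default" \
      "$root/sites-available/example.com" \
      "$root/sites-available/test.com"

# Enabling a site = creating a symlink in sites-enabled:
ln -s "$root/sites-available/example.com" "$root/sites-enabled/"
ln -s "$root/sites-available/test.com" "$root/sites-enabled/"

# Disabling the default site = removing only the link; the original
# file in sites-available is untouched:
rm -f "$root/sites-enabled/default"
ls "$root/sites-enabled"
```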

We also need to adjust one setting really quickly in the default Nginx configuration file. Open it up by typing:

sudo nano /etc/nginx/nginx.conf

We just need to uncomment one line. Find and remove the comment from this:

server_names_hash_bucket_size 64;

Now, we are ready to restart Nginx to enable your changes. You can do that by typing:

sudo service nginx restart

Nginx should now be serving both of your domain names.

Step Five — Set Up Local Hosts File (Optional)

If you have not been using domain names that you own and instead have been using dummy values, you can modify your local computer's configuration to allow you to temporarily test your Nginx server block configuration.

This will not allow other visitors to view your site correctly, but it will give you the ability to reach each site independently and test your configuration. This basically works by intercepting requests that would usually go to DNS to resolve domain names. Instead, we can set the IP addresses we want our local computer to go to when we request the domain names.

Make sure you are operating on your local computer during these steps and not your VPS server. You will need to have root access, be a member of the administrative group, or otherwise be able to edit system files to do this.

If you are on a Mac or Linux computer at home, you can edit the file needed by typing:

sudo nano /etc/hosts

If you are on Windows, you can find instructions for altering your hosts file here.

You need your server's public IP address and the domains you want to route to the server. Assuming that my server's public IP address is 111.111.111.111, the lines I would add to my file would look something like this:

127.0.0.1   localhost
127.0.0.1   guest-desktop
111.111.111.111 example.com
111.111.111.111 test.com

This will intercept any requests for example.com and test.com and send them to your server, which is what we want if we don't actually own the domains that we are using.

Save and close the file when you are finished.

Step Six — Test your Results

Now that you are all set up, you should test that your server blocks are functioning correctly. You can do that by visiting the domains in your web browser:

http://example.com

You should see a page that looks like this:

Nginx first server block

If you visit your second domain name, you should see a slightly different site:

http://test.com

Nginx second server block

If both of these sites work, you have successfully configured two independent server blocks with Nginx.

At this point, if you adjusted your hosts file on your local computer in order to test, you'll probably want to remove the lines you added.

If you need domain name access to your server for a public-facing site, you will probably want to purchase a domain name for each of your sites. You can learn how to set them up to point to your server here.

Conclusion

You should now have the ability to create server blocks for each domain you wish to host from the same server. There aren't any real limits on the number of server blocks you can create, so long as your hardware can handle the traffic.