Install and run the mongo image, mapping port 27017 and the data directory /data/db:
docker run --rm -d -v /home/alfred/tmp/data:/data/db -v /tmp:/tmp -p 27017:27017 --name mongo mongo
Install and run the mysql image, persisting the data directory (/var/lib/mysql in the mysql image) and setting the root password to the_pass:
docker run -v /home/alfred/tmp/data:/var/lib/mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=the_pass mysql
The same as above, but using version 8.0:
docker run -v /home/alfred/tmp/data:/var/lib/mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=the_pass -d mysql:8.0
To access: mysql -u root -h 127.0.0.1 -p
sudo docker run -p 5432:5432 -v /home/alfred/tmp/postgresdata:/var/lib/postgresql/data -e POSTGRES_PASSWORD=mysecretpassword postgres
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=mysecretpassword postgres:9.4
sudo docker run -p 8000:80 -v /home/alfred/php/:/var/www/html php:7.0-apache
You will need to add an .htaccess file with the following content:
allow from all
1. Create a Dockerfile and add:
FROM php:7.0-apache
2. Build it:
$ sudo docker build -t my-php .
3. Run it:
$ sudo docker run -d -p 8000:80 -v /home/alfred/php/:/var/www/html my-php
To install it on a virtual server, the server must support KVM.
Containers are completely self-contained machines: for all intents and purposes each one has its own OS, its own file system and anything else you would expect to find in a virtualised machine. But the catch is that a container only runs one program†. For example, you may have a MySQL server running in one container and Redis running in a separate container.
Each container can have directories (zero or more) mounted to it from the host. For example if you were running an Apache web server container you would not load the source files onto the container itself. Rather you would mount a directory of the host operating system (containing the files for the web server) to a directory of the container, like:
/Users/elliot/Development/mywebsite → /var/www/html
This makes the containers (and images) stateless. Containers can be restarted and images can be destroyed without affecting the application. It also makes the images much smaller and reusable. Another advantage is that several containers can share the same mounted directory.
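A sketch of sharing one host directory between two containers (the directory, container names and ports here are made up for illustration; the docker commands are guarded so they are skipped when docker is not installed):

```shell
# Hypothetical host directory holding the web files.
mkdir -p /tmp/shared-src
echo '<h1>hello</h1>' > /tmp/shared-src/index.html

if command -v docker >/dev/null 2>&1; then
  # Both containers mount the same host directory; edits on the host
  # are visible in both instantly, and neither container holds state.
  docker run -d --name web-a -v /tmp/shared-src:/var/www/html -p 8080:80 php:7.0-apache
  docker run -d --name web-b -v /tmp/shared-src:/var/www/html -p 8081:80 php:7.0-apache
fi
```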
Usually you will have several containers for other services like a database, web service, background tasks, etc. For this we use the docker-compose command.
docker-compose uses a very simple YAML file to build multiple containers. Each container can have its own Dockerfile that customises the individual container but docker-compose will build all the containers and put them into the same virtual network.
Images are a snapshot of the file system, however they are always based on another image.
A Dockerfile is a text file (usually held in the root of your project) that has the steps required to build an image. This is akin to the bash script that you would use to install software or setup environment variables. A Dockerfile looks like this:
FROM php:7.0-apache
COPY config/php.ini /usr/local/etc/php/
ENV APP_ENV dev
Each line is a command. The first line is always a FROM command that specifies the base image which we build upon. Each step creates a new image, but each image only contains the changes since the last snapshot (previous command).
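A small sketch of this layering (the directory and image name layer-demo are made up). After building, docker history shows one layer per Dockerfile instruction:

```shell
# Each instruction in this Dockerfile becomes one image layer.
mkdir -p /tmp/layer-demo && cd /tmp/layer-demo
cat > Dockerfile <<'EOF'
FROM php:7.0-apache
ENV APP_ENV dev
RUN echo "built" > /built.txt
EOF

# Build and inspect the layers (requires a running Docker daemon):
#   docker build -t layer-demo .
#   docker history layer-demo
head -n 1 Dockerfile
```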
To run Docker containers you first create a machine. Once you create a machine, you can reuse it as often as you like. Like any VirtualBox VM, it maintains its configuration between uses.
Docker hub is a repository of Docker Images created by the community.
The general idea of docker-machine is to give you tools to create and manage Docker clients. This means you can easily spin up a virtual machine and use that to run whatever Docker containers you want or need on it.
What’s the difference between Docker Engine and Docker Machine?
When people say “Docker” they typically mean Docker Engine, the client-server application made up of the Docker daemon, a REST API that specifies interfaces for interacting with the daemon, and a command line interface (CLI) client that talks to the daemon (through the REST API wrapper). Docker Engine accepts docker commands from the CLI, such as docker run <image>, docker ps to list running containers, docker images to list images, and so on.
Docker Machine is a tool for provisioning and managing your Dockerized hosts (hosts with Docker Engine on them). Typically, you install Docker Machine on your local system. Docker Machine has its own command line client docker-machine and the Docker Engine client, docker. You can use Machine to install Docker Engine on one or more virtual systems. These virtual systems can be local (as when you use Machine to install and run Docker Engine in VirtualBox on Mac or Windows) or remote (as when you use Machine to provision Dockerized hosts on cloud providers). The Dockerized hosts themselves can be thought of, and are sometimes referred to as, managed “machines”.
The basic execution of docker-machine is:
docker-machine create --driver virtualbox default
This will create your machine and output useful information on completion. The machine will be created with a 5GB hard disk, 2 CPUs and 4GB of RAM:
$ docker-machine create development --driver virtualbox --virtualbox-disk-size "5000" --virtualbox-cpu-count 2 --virtualbox-memory "4096"
If you are using the virtualbox driver and you'd like to forward a port to your machine, you need to do it from VirtualBox, so that the port is accessible on your host machine.
VBoxManage controlvm "development" natpf1 "tcp-port8000,tcp,,8000,,8000";
There are some basic actions for a docker-machine:
$ docker-machine start development
$ docker-machine stop development
$ docker-machine ls
You can also generate environment variables:
$ docker-machine env development
Or access it by ssh:
$ docker-machine ssh development
To find out its IP:
$ docker-machine ip default
To list all the machines that exist:
$ docker-machine ls
Set environment variables to dictate that docker should run a command against a particular machine.
docker-machine env machinename will print out export commands which can be run in a subshell. Running docker-machine env -u will print unset commands which reverse this effect.
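The mechanism can be sketched without docker-machine at all: env prints export lines, and eval runs them in the current shell (the values below are illustrative, not real output):

```shell
# Simulated output of `docker-machine env dev` (values are made up):
env_output='export DOCKER_HOST="tcp://192.168.99.101:2376"
export DOCKER_TLS_VERIFY="1"'

# eval executes those export lines in the *current* shell, so the
# docker client in this shell now targets that host:
eval "$env_output"
echo "$DOCKER_HOST"    # tcp://192.168.99.101:2376
```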
Set and unset the environment variables:
$ env | grep DOCKER
$ eval "$(docker-machine env dev)"
$ env | grep DOCKER
DOCKER_HOST=tcp://192.168.99.101:2376
DOCKER_CERT_PATH=/Users/nathanleclaire/.docker/machines/.client
DOCKER_TLS_VERIFY=1
DOCKER_MACHINE_NAME=dev
$ # If you run a docker command, now it will run against that host.
$ eval "$(docker-machine env -u)"
$ env | grep DOCKER
$ # The environment variables have been unset.
When you install Docker Machine, you get a set of drivers for various cloud providers (like Amazon Web Services, Digital Ocean, or Microsoft Azure) and local providers (like Oracle VirtualBox, VMWare Fusion, or Microsoft Hyper-V).
For creating a machine without a driver:
$ docker-machine create --url=tcp://50.134.234.20:2376 custombox
Using the docker command with the following parameters you can:
images: list images
rm: remove containers
rmi: remove images
start, stop, pause, unpause: change the container state
ps: list containers (-a to view all, active and inactive; -l to view the latest)
To run the nginx image:
$ docker run -d -p 8000:80 nginx
When we do something like this…
docker run -it ubuntu
docker run -it java
docker run -it python
… The docker service running on the host checks to see if we have a copy of the requested image locally. If we don't, it checks the public registry (the Docker Hub) to see if there's an image named ubuntu available. If it finds it, it downloads the image, stores it in its local cache of images (ready for next time) and creates a new container based on the requested image.
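A guarded sketch of that behaviour (the commands are skipped when docker is not installed): the first run pulls, later runs reuse the local cache:

```shell
if command -v docker >/dev/null 2>&1; then
  docker pull ubuntu            # downloads the image if not already cached
  docker images ubuntu          # the image is now in the local cache
  docker run --rm ubuntu true   # reuses the cached image, no download
  status="pulled"
else
  status="docker unavailable"
fi
echo "$status"
```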
If we're going to connect from one container to another, we need to link them, which tells docker that we explicitly want to allow communication between the two.
When we run a container, we can tell docker that we intend to connect to another container using the link parameter.
docker run -it -p 8123:8123 --link db:db -e DATABASE_HOST=DB users-service
$ docker run hello-world
$ docker search ubuntu
$ docker pull ubuntu
If you make a change inside a container (imagine you install a package using apt) you can commit it as a new image:
docker commit -m "What did you do to the image" -a "Author Name" container-id repository/new_image_name
$ docker exec -it db /bin/bash
docker exec trusting_jang cat /etc/hosts
docker ps
docker ps -a
The latter also lists stopped containers.
Note that you need to write the full path:
docker run -ti -p 8000:8000 -v /home/alfred/Documents/workspaces/newskid/newskid.dockerfile/files:/var/src/newskid/files newskid
docker run --name db -d -e MYSQL_ROOT_PASSWORD=123 -p 3306:3306 mysql:latest
This starts a MySQL instance running, allowing access through port 3306 using the root password 123.
docker run tells the engine we want to run an image (the image comes at the end, mysql:latest).
The last part is really important - even though that's the MySQL default port, if we don't tell docker explicitly we want to map it, it will block access through that port (because containers are isolated until you tell them you want access).
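A sketch of the mapping (the container name mydb is made up; the docker commands are guarded so they are skipped without docker):

```shell
if command -v docker >/dev/null 2>&1; then
  docker run --name mydb -d -e MYSQL_ROOT_PASSWORD=123 -p 3306:3306 mysql:latest
  docker port mydb              # shows the published mapping, e.g. 3306/tcp -> 0.0.0.0:3306
fi

# The -p format is HOST:CONTAINER; the left side is the host port,
# the right side the port inside the container:
mapping="3306:3306"
echo "host=${mapping%%:*} container=${mapping##*:}"
```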
Enable MySQL libraries:
FROM php:7.0-apache
RUN /usr/local/bin/docker-php-ext-install mysqli
RUN /usr/local/bin/docker-php-ext-install pdo_mysql
RUN apt update && apt install mysql-client-5.5 -y
To build an image from a Dockerfile you use docker build ., or docker build -f /path/to/a/Dockerfile .. You can also tag your image when building it: docker build -t davidsale/dockerizing-python-django-app .; in this case the tag is "davidsale/dockerizing-python-django-app".
A basic dockerfile:
# FROM directive instructing base image to build upon
FROM python:2-onbuild

# COPY startup script into known file location in container
COPY start.sh /start.sh

# EXPOSE port 8000 to allow communication to/from server
EXPOSE 8000

# CMD specifies the command to execute to start the server running.
CMD ["/start.sh"]
FROM indicates the base image to use. For example, FROM python:2-onbuild will use this Dockerfile: https://github.com/docker-library/python/blob/7663560df7547e69d13b1b548675502f4e0917d1/2.7/onbuild/Dockerfile, which is taken from https://hub.docker.com/_/python/.
COPY copies a file or a directory into the container; for example, to copy a file: COPY start.sh /start.sh, or to copy a whole folder: COPY . /usr/src/app
EXPOSE 8000 documents that the container listens on port 8000, but it does not publish the port by itself. To actually link your host's 8000 port with the 8000 port inside the container you still need -p 8000:8000 at run time.
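A sketch of this (the directory and image name expose-demo are made up): EXPOSE only marks the port in the image; publishing still happens with -p when running:

```shell
mkdir -p /tmp/expose-demo && cd /tmp/expose-demo
cat > Dockerfile <<'EOF'
FROM python:2-onbuild
EXPOSE 8000
CMD ["/start.sh"]
EOF

# Build and run, publishing host port 8000 to the exposed container port
# (requires a running Docker daemon):
#   docker build -t expose-demo .
#   docker run -d -p 8000:8000 expose-demo
grep EXPOSE Dockerfile
```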
CMD ["command"] executes the main command; when it finishes, the container will stop. RUN command runs a console command at build time.

Docker Compose lets you create a file which defines each container in your system, the relationships between them, and build or run them all.
You need to have a new file in the root of your project called docker-compose.yml. It looks more or less like this:
version: '2'
services:
  users-service:
    build: ./users-service
    ports:
      - "8123:8123"
    depends_on:
      - db
    environment:
      - DATABASE_HOST=db
  db:
    build: ./test-database
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
  redis:
    image: redis
volumes:
  logvolume01: {}
The next commands will build all of the images, create the containers and run them in the correct order:
docker-compose build
docker-compose up
The build value for each of our services in the docker-compose.yml tells docker where to go to find the Dockerfile. When we run docker-compose up, docker starts all of our services. Notice that in the docker-compose.yml we can specify ports and dependencies.
When we run docker-compose down we shut down the containers.
docker volume ls
docker volume prune
version: "3"
services:
  proxy:
    build: ./proxy
    networks:
      - frontend
  app:
    build: ./app
    networks:
      - frontend
      - backend
  db:
    image: postgres
    networks:
      - backend
networks:
  frontend:
    # Use a custom driver
    driver: custom-driver-1
  backend:
    # Use a custom driver which takes special options
    driver: custom-driver-2
    driver_opts:
      foo: "1"
      bar: "2"
List docker networks:
$ sudo docker network ls
Create network:
sudo docker network create mynetwork
Now you can inspect its details (you can see the assigned ip addresses to the hosts):
$ docker network inspect mynetwork
To add a docker container to a network just add the parameter:
--network mynetwork
First you need to create your own docker network:
docker network create --subnet=172.18.0.0/16 mynet123
Then run with ip:
docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash
Additionally:
--hostname to specify a hostname
--add-host to add more entries to /etc/hosts
This requires installing an SSH key for private repositories.
While this technique may work for managed cloud servers, it isn't recommended for container-based deployments.
Use the -v switch to specify the local directory path that you wish to mount, along with the location where it should be mounted within the running container:
docker run -d -P --name <name of your container> -v /path/to/local/directory:/path/to/container/directory <image name> ...
Using this command, the host's directory becomes accessible to the container under the path you specify. This is particularly useful when developing locally, as you can use your favorite editor to work locally, commit code to Git, and pull the latest code from remote branches.
You can use the COPY command within a Dockerfile to copy files from the local filesystem into a specific directory within the container. The following Dockerfile example would recursively add the current working directory into the /app directory of the container image:
# Dockerfile for a Ruby 2.2 container
FROM ruby:2.2
RUN mkdir /app
COPY . /app
The ADD command is similar to the COPY command, but has the added advantage of fetching remote URLs and extracting tarballs.
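A sketch of those two extras (the URL and tarball names below are made up). Note that ADD auto-extracts only local tarballs; a remote URL is fetched but not extracted:

```shell
cat > /tmp/add-demo.Dockerfile <<'EOF'
FROM ruby:2.2
# ADD can fetch a remote URL (downloaded, but NOT auto-extracted):
ADD https://example.com/archive.tar.gz /tmp/archive.tar.gz
# A local tarball given to ADD is extracted automatically into the target dir:
ADD vendor.tar.gz /opt/vendor/
EOF
grep -c '^ADD' /tmp/add-demo.Dockerfile
```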
FROM ubuntu:12.04
RUN apt-get update
RUN apt-get install -y nginx
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
$ docker build . -t estars_nginx
If you are iterating on the Dockerfile, you will probably prefer to do something like:
$ docker build . -t estars_nginx --no-cache
$ docker run -d -p 8000:80 estars_nginx
$ docker run -ti -p 8000:80 estars_nginx
$ docker stop estars_nginx
You will need to regenerate certificates for the default machine:
docker-machine regenerate-certs default docker-machine restart default
docker rmi -f $(docker images | grep "^<none>" | awk '{print $3}')
Use this script to destroy an old /var/lib/docker directory more safely than rm -rf:
#!/bin/sh
set -e
dir="$1"
if [ -z "$dir" ]; then
{
echo 'This script is for destroying old /var/lib/docker directories more safely than'
echo ' "rm -rf", which can cause data loss or other serious issues.'
echo
echo "usage: $0 directory"
echo " ie: $0 /var/lib/docker"
} >&2
exit 1
fi
if [ "$(id -u)" != 0 ]; then
echo >&2 "error: $0 must be run as root"
exit 1
fi
if [ ! -d "$dir" ]; then
echo >&2 "error: $dir is not a directory"
exit 1
fi
dir="$(readlink -f "$dir")"
echo
echo "Nuking $dir ..."
echo ' (if this is wrong, press Ctrl+C NOW!)'
echo
( set -x; sleep 10 )
echo
dir_in_dir() {
inner="$1"
outer="$2"
[ "${inner#$outer}" != "$inner" ]
}
# let's start by unmounting any submounts in $dir
# (like -v /home:... for example - DON'T DELETE MY HOME DIRECTORY BRU!)
for mount in $(awk '{ print $5 }' /proc/self/mountinfo); do
mount="$(readlink -f "$mount" || true)"
if dir_in_dir "$mount" "$dir"; then
( set -x; umount -f "$mount" )
fi
done
# now, let's go destroy individual btrfs subvolumes, if any exist
if command -v btrfs > /dev/null 2>&1; then
root="$(df "$dir" | awk 'NR>1 { print $NF }')"
root="${root#/}" # if root is "/", we want it to become ""
for subvol in $(btrfs subvolume list -o "$root/" 2>/dev/null | awk -F' path ' '{ print $2 }' | sort -r); do
subvolDir="$root/$subvol"
if dir_in_dir "$subvolDir" "$dir"; then
( set -x; btrfs subvolume delete "$subvolDir" )
fi
done
fi
# finally, DESTROY ALL THINGS
( set -x; rm -rf "$dir" )
From the documentation:
The docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo. The docker daemon always runs as the root user. If you don’t want to use sudo when you use the docker command, create a Unix group called docker and add users to it. When the docker daemon starts, it makes the ownership of the Unix socket read/writable by the docker group.
The steps would be:
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
If it did not work:
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
docker run hello-world
You can control the docker service with: service docker start|stop|restart|status.
You also can start it manually with the dockerd command as sudo user.
To configure it, create a daemon.json file in the /etc/docker/ folder. This is the config documentation for that: https://docs.docker.com/engine/reference/commandline/dockerd/#miscellaneous-options
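For example, a minimal daemon.json (the keys shown are real dockerd options; the values are illustrative, and the file is written to /tmp here so the sketch doesn't touch the real config):

```shell
# In a real setup this would live at /etc/docker/daemon.json.
cat > /tmp/daemon.json <<'EOF'
{
  "data-root": "/var/lib/docker",
  "log-level": "warn"
}
EOF
# Then restart the daemon to apply it, e.g.: sudo service docker restart
```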
Install MySQL in the image, presetting the root password via debconf:
FROM php:7.0-apache
RUN { \
echo mysql-server mysql-server/root_password password mypassword; \
echo mysql-server mysql-server/root_password_again password mypassword; \
} | debconf-set-selections
RUN apt update && apt install -y mysql-server