Deploying a Django App to AWS EC2 Using Terraform and Ansible
Deploying a Django application to AWS can be a complex and time-consuming process, but with the help of Terraform and Ansible, it can be made much simpler. In this tutorial, we will walk through the steps necessary to deploy a Django application to AWS using Terraform and Ansible.
AWS
AWS provides a highly scalable and reliable infrastructure that can be used to host websites and web applications, run data processing and analysis jobs, store and manage data, and much more. AWS also offers a variety of tools and services for managing and monitoring your infrastructure, including automation tools and APIs that enable you to integrate AWS services into your own applications and workflows.
We'll be using AWS to manage our infrastructure with Terraform. The first step is to set up your AWS account and create an access key and secret access key that Terraform and Ansible can use to authenticate with your AWS account.
- Create an AWS account: Go to the AWS website and create a new AWS account if you don't already have one.
- Create an IAM user: IAM (Identity and Access Management) is a service that enables you to manage users and their permissions. Create a new IAM user by going to the IAM console and following the steps to create a new user. Assign appropriate permissions to the user based on the tasks it needs to perform.
- Generate an access key and secret access key: these keys are used to authenticate with your AWS account from Terraform and Ansible. Go to the IAM console, select the user you created in the previous step, and then click on the "Security Credentials" tab. Click on "Create Access Key", and then download the generated keys.
- Set up environment variables: To enable Terraform and Ansible to use the access and secret access keys, you need to set them as environment variables on your local machine. Set the following environment variables:
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- Verify access: Test that Terraform and Ansible can authenticate with your AWS account by running a simple command that lists the available AWS resources. For example, run the following commands in your terminal:
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
aws ec2 describe-instances --region=us-west-2
You should see a list of your EC2 instances if everything is set up correctly.
Terraform
Terraform is an open-source infrastructure as code (IaC) tool that allows users to define and manage their infrastructure in a declarative language. Terraform has become a widely used tool for managing infrastructure across cloud providers, on-premises data centers, high-level components like DNS, and more.
With Terraform, users can define their infrastructure using a simple syntax. Terraform will create, modify, and destroy resources as necessary to ensure that the infrastructure is always in the desired state.
The next step is to create a Terraform configuration file that describes the infrastructure to create on AWS. This will involve specifying the type and size of the EC2/RDS instances that you want to use, the required networking configuration, and the DNS configuration.
For this post, we are using the free tier of EC2 and RDS, plus a paid domain from GoDaddy.
How it works
Overall, Terraform simplifies the process of managing infrastructure by providing a consistent, automated way to manage resources across cloud providers and environments. In Terraform, a file with the extension ".tf" is a configuration file that contains the infrastructure as code (IaC) definitions for creating and managing resources within a specific provider, such as AWS.
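As a reference for the rest of the post, these are the standard Terraform CLI commands we'll rely on, run from the terraform/ directory described below:
$ terraform init    # download the providers declared in the configuration
$ terraform plan    # preview the changes Terraform would make
$ terraform apply   # create or update the infrastructure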
The project structure
terraform/
  main.tf
  variables.tf
  permissions.tf
  security_groups.tf
  network.tf
  outputs.tf
main.tf
In the main.tf file, we specify the main components of our infrastructure: an EC2 instance to host our Django app and an RDS instance to use as storage. Don't worry about the resources referenced here that aren't defined in this file; we will go through each one in detail.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
    godaddy = {
      source  = "n3integration/godaddy"
      version = "~> 1.9.1"
    }
  }
  required_version = ">= 1.3.0"
}
provider "aws" {
region = "us-west-1"
}
provider "godaddy" {}
# EC2 INSTANCE
resource "aws_instance" "django_boilerplate_webserver" {
  ami                         = "ami-0db245b76e5c21ca1"
  instance_type               = "t2.micro"
  associate_public_ip_address = true
  key_name                    = aws_key_pair.django_boilerplate_private_ssh_key_pair.key_name
  subnet_id                   = aws_subnet.public_subnet_1.id
  tags = {
    Name = "Django Boilerplate"
  }
  availability_zone = var.AWS_AVAILABILITY_ZONES[0]
  vpc_security_group_ids = [
    aws_security_group.allow_ssh.id,
    aws_security_group.allow_https.id,
    aws_security_group.allow_all_outbound_connections.id
  ]
}
# RDS INSTANCE
resource "aws_db_instance" "django_boilerplate" {
  identifier = "django-boilerplate"
  db_name    = "django_boilerplate"
  username   = "root"
  password   = var.AWS_DB_PASSWORD_DJANGO_BOILERPLATE
  tags = {
    Name = "Django Boilerplate"
  }
  availability_zone    = var.AWS_AVAILABILITY_ZONES[0]
  db_subnet_group_name = aws_db_subnet_group.main_private_db_subnet_group.name
  vpc_security_group_ids = [
    aws_security_group.allow_rds_postgres_connection.id,
    aws_security_group.allow_all_outbound_connections.id
  ]
  port                    = "5432"
  engine                  = "postgres"
  engine_version          = "14.6"
  instance_class          = "db.t4g.micro"
  allocated_storage       = "20" # GB
  storage_type            = "gp2"
  publicly_accessible     = false
  skip_final_snapshot     = true
  backup_retention_period = 7
}
- Configure Terraform so that it knows which providers/services are being used for this project. In our case, it's `AWS` and `GoDaddy`.
- Declare each provider (even if it has no extra configuration)
- Configure an EC2 instance
- `ami` is the ID of the machine image we'll use. The current ID is for Ubuntu 20.04.6 LTS
- `key_name` is the name of the public SSH key pair registered in AWS
- `subnet_id` is the network subnet this instance is attached to. It allows the EC2 instance to connect to the internet and to the RDS instance.
- `availability_zone` is where to create the instance
- `vpc_security_group_ids` is how we configure the permissions to access the machine
- Configuring an RDS instance
- `availability_zone` is where to create the instance
- `db_subnet_group_name`, the DB network subnet attached to this instance. This is used to open a way for the EC2 instance to communicate with the database.
- `vpc_security_group_ids` sets the rules to allow access to the Postgres database.
variables.tf
data "http" "MY_IP" {
url = "https://ipv4.icanhazip.com"
}
variable "AWS_AVAILABILITY_ZONES" {
description = "Availability zones"
type = list(string)
default = ["us-west-2b", "us-west-2c"]
}
variable "AWS_DB_PASSWORD_DJANGO_BOILERPLATE" {
description = "Password of the 'DjangoBoilerplate' DB"
type = string
sensitive = true
}
- Get and set the "MY_IP" variable so that we can ensure that only you can access the AWS resources
- Set `AWS_AVAILABILITY_ZONES`; for our case, it's required two different zones.
- Set the `AWS_DB_PASSWORD_DJANGO_BOILERPLATE` variable.
- Later, we'll set this variable on our shell.
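Terraform reads any environment variable prefixed with `TF_VAR_` as an input variable, so both values can be provided from the shell without hard-coding them:
$ export TF_VAR_AWS_DB_PASSWORD_DJANGO_BOILERPLATE=
$ export TF_VAR_VPN_IP_ADDRESS=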
permissions.tf
resource "tls_private_key" "django_boilerplate_private_ssh_key" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "local_sensitive_file" "django_boilerplate_private_ssh_key" {
filename = pathexpand("~/.ssh/django_boilerplate_private_ssh_key.pem")
file_permission = "600"
directory_permission = "700"
content = tls_private_key.django_boilerplate_private_ssh_key.private_key_pem
}
resource "aws_key_pair" "django_boilerplate_private_ssh_key_pair" {
key_name = "django_boilerplate_private_ssh_key"
public_key = tls_private_key.django_boilerplate_private_ssh_key.public_key_openssh
tags = {
Name = "Django Boilerplate"
}
}
- Create a new `django_boilerplate_private_ssh_key` TLS private key.
- Save the generated private key on your machine at `~/.ssh/django_boilerplate_private_ssh_key.pem`
- Create a new AWS key pair with the TLS public key.
- An AWS key pair registers the public key with AWS so it can be injected into EC2 instances; the matching private key stays on your machine and is used to securely access and manage those instances (see the SSH example below).
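Once the instance exists, you can verify the generated key by connecting manually. The default user on Ubuntu AMIs is `ubuntu`, and `<instance_public_ip>` is a placeholder for the Elastic IP printed by the outputs shown later:
$ ssh -i ~/.ssh/django_boilerplate_private_ssh_key.pem ubuntu@<instance_public_ip>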
security_groups.tf
Create the necessary rules to apply to the RDS/EC2 instances, allowing inbound and outbound connections either internally (from the VPC) or externally (from the internet).
- Create a rule to `allow_all_outbound_connections`
- Create a rule to `allow_ssh`
- Create a rule to `allow_https`
- Create a rule to `allow_rds_postgres_connection`
resource "aws_security_group" "allow_all_outbound_connections" {
name = "Allow all outbound connections"
vpc_id = aws_vpc.main_vpc.id
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_security_group" "allow_ssh" {
name = "Allow SSH only for local IP"
vpc_id = aws_vpc.main_vpc.id
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [
"${chomp(data.http.MY_IP.response_body)}/32",
var.VPN_IP_ADDRESS
]
}
egress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [
"${chomp(data.http.MY_IP.response_body)}/32",
var.VPN_IP_ADDRESS
]
}
}
resource "aws_security_group" "allow_https" {
name = "Allow all traffic through HTTP"
vpc_id = aws_vpc.main_vpc.id
ingress {
description = "http"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "https"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_security_group" "allow_rds_postgres_connection" {
name_prefix = "allow-rds-connection"
vpc_id = aws_vpc.main_vpc.id
ingress {
from_port = 5432
to_port = 5432
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 5432
to_port = 5432
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
network.tf
VPC
An Amazon Virtual Private Cloud (VPC) is a virtual network infrastructure that allows AWS customers to launch and isolate resources within a private, secure network.
AWS VPC provides a range of networking features, including creating subnets, configuring route tables, setting up network gateways, and defining security settings. With VPC, users can create custom network topologies and configure networking options, such as IP addresses, subnets, and routing, to suit their specific requirements.
VPC enables users to create and manage their own virtual network within the AWS cloud, allowing them to run resources such as EC2 instances, databases, and other services while maintaining high-level control over their network traffic and security.
- Create a main VPC to connect all AWS resources for this project.
resource "aws_vpc" "main_vpc" {
cidr_block = "172.16.0.0/16"
enable_dns_hostnames = true
enable_dns_support = true
tags = {
Name = "Django Boilerplate"
}
}
Subnets
Subnets provide a way to divide a VPC into smaller, more manageable networks, allowing users to group resources based on their function or security requirements. Each subnet is associated with a specific Availability Zone within a region, and resources within a subnet can communicate with each other using private IP addresses.
# SUBNETS
resource "aws_subnet" "public_subnet_1" {
  vpc_id            = aws_vpc.main_vpc.id
  cidr_block        = cidrsubnet(aws_vpc.main_vpc.cidr_block, 8, 1) # "172.16.1.0/24"
  availability_zone = var.AWS_AVAILABILITY_ZONES[0]
  tags = {
    Name = "Django Boilerplate"
  }
}
resource "aws_subnet" "private_subnet_1" {
  vpc_id            = aws_vpc.main_vpc.id
  cidr_block        = cidrsubnet(aws_vpc.main_vpc.cidr_block, 8, 2) # "172.16.2.0/24"
  availability_zone = var.AWS_AVAILABILITY_ZONES[0]
  tags = {
    Name = "Django Boilerplate"
  }
}
resource "aws_subnet" "private_subnet_2" {
  vpc_id            = aws_vpc.main_vpc.id
  cidr_block        = cidrsubnet(aws_vpc.main_vpc.cidr_block, 8, 3) # "172.16.3.0/24"
  availability_zone = var.AWS_AVAILABILITY_ZONES[1]
  tags = {
    Name = "Django Boilerplate"
  }
}
resource "aws_db_subnet_group" "main_private_db_subnet_group" {
  name        = "main_private_db_subnet_group"
  subnet_ids  = [aws_subnet.private_subnet_1.id, aws_subnet.private_subnet_2.id]
  description = "Subnet group to connect RDS and EC2 instances"
  tags = {
    Name = "Django Boilerplate"
  }
}
- Create a public subnet to allow internet connectivity between the Internet Gateway and the EC2 instance.
- Create two private subnets to connect the EC2 instance to the RDS instance. The RDS subnet group requires private subnets in at least two different Availability Zones.
- Create a `main_private_db_subnet_group` to allow the RDS instance to communicate with the EC2 instance
IP Address
Amazon Elastic IP address (EIP) is a static, public IP address that can be allocated to an Amazon Web Services (AWS) account and associated with an instance, a network interface, or a NAT gateway.
resource "aws_eip" "django_boilerplate_ip" {
instance = aws_instance.django_boilerplate_webserver.id
vpc = true
tags = {
Name = "Django Boilerplate"
}
}
- Create a static IP address for `django_boilerplate_ip`. That way, the IP address won't change if the instance is restarted.
Gateways
A gateway is a network component that provides connectivity between different networks or services. AWS provides several types of gateways to support different use cases. For our case, we are using only two:
- Internet Gateway (IGW): A horizontally scaled, redundant gateway that provides access to the internet for resources within a VPC. Used to allow EC2 instances to connect to the internet.
- NAT Gateway: A managed service that enables resources within a private subnet to access the internet while maintaining higher security. This will be used to connect the EC2 and RDS instances.
resource "aws_internet_gateway" "main-gateway" {
vpc_id = aws_vpc.main_vpc.id
tags = {
Name = "Django Boilerplate"
}
}
resource "aws_nat_gateway" "private_nat_gateway" {
connectivity_type = "private"
subnet_id = aws_subnet.public_subnet_1.id
tags = {
Name = "Django Boilerplate"
}
}
- Create an Internet Gateway to allow the VPC to connect to the internet
- Create a NAT Gateway to allow the RDS instance to communicate with EC2.
Route Tables
A route table is a logical construct that contains a set of rules (or "routes") that determine how network traffic is directed within a Virtual Private Cloud (VPC). AWS route tables are used to direct traffic between subnets within the same VPC, as well as between VPCs or to the internet.
resource "aws_route_table" "internet_route_table" {
vpc_id = aws_vpc.main_vpc.id
tags = {
Name = "Django Boilerplate"
}
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main-gateway.id
}
}
resource "aws_route_table" "database_route_table" {
vpc_id = aws_vpc.main_vpc.id
tags = {
Name = "Django Boilerplate"
}
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_nat_gateway.private_nat_gateway.id
}
}
- Create a Route Table `internet_route_table` to configure how inbound/outbound requests should move between the resources (e.g., internet Gateway and EC2)
- Create a Route Table `database_route_table` to configure how connections from the private database subnet are routed through the NAT Gateway.
Route Tables Association
Each subnet within a Virtual Private Cloud (VPC) must be associated with a route table specifying how network traffic is directed to and from the subnet. Route table association refers to the process of associating a subnet with a route table.
resource "aws_route_table_association" "add_internet_to_public_subnet_1" {
route_table_id = aws_route_table.internet_route_table.id
subnet_id = aws_subnet.public_subnet_1.id
}
resource "aws_route_table_association" "add_database_connection_to_public_subnet_1" {
route_table_id = aws_route_table.database_route_table.id
subnet_id = aws_subnet.private_subnet_1.id
}
- Create a Route Table Association to `add_internet_to_public_subnet_1`
- Create a Route Table Association to `add_database_connection_to_private_subnet_1`
DNS
Terraform can also set up the DNS records for the newly created EC2 instance automatically. To do that, we only need to configure a new `godaddy_domain_record` (if you are using GoDaddy).
resource "godaddy_domain_record" "godaddy_django_boilerplate" {
domain = "djangoboilerplate.org"
record {
name = "@"
type = "A"
data = aws_eip.django_boilerplate_ip.public_ip
ttl = 600
priority = 0
}
record {
name = "www"
type = "CNAME"
data = "@"
ttl = 3600
priority = 0
}
}
- Configure a new API key and secret on GoDaddy to allow connections.
- `$ export GODADDY_API_KEY=`
- `$ export GODADDY_API_SECRET=`
- Create a new GoDaddy record, `godaddy_django_boilerplate`, pointing the domain name at the EC2 instance's Elastic IP.
outputs.tf
output "instance_id" {
description = "ID of the EC2 instance"
value = aws_instance.django_boilerplate_webserver.id
}
output "instance_public_ip" {
description = "Public IP address of the EC2 instance"
value = aws_eip.django_boilerplate_ip.public_ip
}
output "db_instance_host" {
description = "Public IP address of the RDS instance"
value = aws_db_instance.django_boilerplate.address
}
output "project_ssh_key" {
description = "A new key was generated on ~/.ssh/ to access the EC2 with SSH"
value = "Saved at ${local_sensitive_file.django_boilerplate_private_ssh_key.filename}"
}
After applying the changes, Terraform prints all output variables to the console. This is useful when you need to copy values somewhere else, like the Ansible inventory.
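You can also read any output again later without re-applying, from the terraform/ directory:
$ terraform output instance_public_ip
$ terraform output db_instance_host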
Recapping everything
- VPC: To connect all resources and subnets
- Internet Gateway: To access the internet
- NAT Gateway: To route traffic for the private database subnets
- Subnets: To connect all services/instances/gateways
- Route Tables: To configure how inbound/outbound requests should move between the resources (e.g., Internet Gateway)
- Route Table Association: To connect a Route Table to a Subnet
- Security Group (SSH): To ensure SSH is available on port 22, and only from your IP
- Security Group (HTTP/HTTPS): To allow connections on ports 80 and 443
- Security Group (Postgres): To allow connections to the RDS instance on port 5432
- Key Pair: To connect to the EC2 instance
- Elastic IP: To give the EC2 instance a static public IP address
- DNS: To point the GoDaddy domain at the Elastic IP
Ansible
Ansible is an open-source IT automation tool that allows you to automate the deployment and management of your projects. When it comes to deploying a Django application, Ansible provides a simple and efficient way to automate the process.
With Ansible, you can easily provision servers, install software dependencies, configure settings, and deploy your Django application to production servers. This reduces the time spent managing servers, reduces the chance of human error, and makes managing your Django application infrastructure easier.
How it works
Ansible works by using a client-server architecture, where the Ansible control node communicates with the managed hosts over SSH or WinRM protocols. You can write playbooks, which are YAML files, that define the tasks and configuration settings that you want to apply to your managed hosts.
When you run an Ansible playbook, it connects to the managed hosts and executes the defined tasks in order. Ansible uses modules to perform specific actions, such as installing packages and copying files.
One of the key features of Ansible is its idempotence, which means that running a playbook multiple times will always result in the same desired state, even if some tasks have already been executed. This ensures that the configuration of your managed hosts remains consistent and reduces the risk of errors or unexpected changes.
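Thanks to this, re-running a playbook is safe, and you can preview what would change without touching the host at all by adding Ansible's check mode flag to any of the commands shown later, for example:
$ ansible-playbook -i inventory playbook_setup_django.yml --check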
Check out what the project folder looks like:
ansible/
  inventory
  playbook_setup_ubuntu.yml
  playbook_setup_django.yml
  playbook_setup_webservers.yml
And details about some of the keywords used in the Ansible playbooks:
- `hosts`: The hosts where the actions will be performed
- `vars`: Local variables
- `tasks`: Each task performed by Ansible to run on the hosts
- `become`: Run the task as sudo
- `tags`: A way to categorize tasks so you can run only specific ones
Inventory
The Ansible inventory file is a text file that defines the hosts and groups of hosts that Ansible will manage. The inventory file contains information such as the hostnames or IP addresses of the managed hosts, connection details such as SSH keys or credentials, and additional metadata such as host variables and group variables.
[webservers]
django_boilerplate ansible_host=44.235.42.69
[webservers:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=~/.ssh/django_boilerplate_private_ssh_key.pem
[all:vars]
ansible_python_interpreter=/usr/bin/python3.8
- Set the IP address printed by Terraform
- Set the user and SSH private key
- Set the Python path used by Ansible to perform all actions
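Before running any playbook, you can confirm that Ansible can reach the instance over SSH using the built-in ping module (run from the ansible/ directory):
$ ansible -i inventory webservers -m ping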
playbook_setup_ubuntu.yml
This playbook is used to set up initial configurations for the EC2 instance.
---
- name: Setup Ubuntu initial configuration
  hosts: [django_boilerplate]
  vars:
    project_local_path: "{{ playbook_dir }}/../../../django_boilerplate"
    project_path: /home/ubuntu/django_boilerplate
    virtualenv_path: "{{ project_path }}/venv"
  tasks:
    - name: Update and upgrade Apt
      become: true
      apt:
        update_cache: yes
        upgrade: yes
      tags: ['setup', 'update_packages']
    - name: Install APT Packages
      become: true
      apt: name={{ item }} update_cache=yes state=latest
      loop: [
        # Tools
        'git',
        'vim',
        'wget',
        'less',
        'fish',
        'tmux'
      ]
      tags: ['setup', 'update_packages']
    - name: Change default shell to fish
      become: true
      shell: chsh -s /usr/bin/fish ubuntu
      tags: ['setup']
    - name: Set tmux config file
      copy:
        src: "{{ project_local_path }}/deployment/config_files/tmux.conf"
        dest: "~/.tmux.conf"
      tags: ['setup', 'update']
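This playbook can also be run on its own; for example, to re-run only the package-related tasks later, limit the run by tag:
$ ansible-playbook -i inventory playbook_setup_ubuntu.yml --tags="update_packages"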
playbook_setup_django.yml
This playbook is used to set up all configurations to run a Django application on an EC2 instance.
---
- name: Deploy Django Application
  hosts: [webservers]
  vars:
    database_name: "django_boilerplate"
    project_path_local: "{{ playbook_dir }}/../../../django_boilerplate"
    project_path: "/home/ubuntu/django_boilerplate"
    virtualenv_path: "{{ project_path }}/venv"
  tasks:
    - name: Update and upgrade Apt
      become: true
      apt:
        update_cache: yes
        upgrade: yes
      tags: ['setup', 'update_packages', 'deploy']
    - name: Install APT Packages
      become: true
      apt: name={{ item }} update_cache=yes state=latest
      loop: [
        # Build Deps
        'g++',
        'libffi-dev',
        'gnupg2',
        'build-essential',
        'libpq-dev',
        'postgresql-client',
        'python3.9',
        'python3-dev',
        'python3.9-dev',
        'python3-pip',
        'python3.9-venv',
        'python3-testresources',
        'python-is-python3',
        'python-dev-is-python3',
        'libpython3.9',
      ]
      tags: ['setup', 'update_packages', 'deploy']
    - name: Create project directory
      file:
        path: "{{ project_path }}"
        state: directory
      tags: ['setup']
    - name: Copy project files
      synchronize:
        src: "{{ project_path_local }}"
        dest: "{{ project_path }}/.."
        rsync_opts: [--exclude=.git*]
      tags: ['setup', 'deploy', 'quick_deploy']
    - name: Configure Django environment settings
      copy:
        remote_src: True
        src: "{{ project_path }}/deployment/config_files/.env.production"
        dest: "{{ project_path }}/.env"
      tags: ['setup', 'deploy', 'quick_deploy']
    - name: Create virtual environment
      shell: python3.9 -m venv {{ virtualenv_path }}
      tags: ['setup', 'deploy', 'quick_deploy']
    - name: Activate virtual environment
      shell: . {{ virtualenv_path }}/bin/activate
      tags: ['setup', 'deploy', 'quick_deploy']
    - name: Upgrade pip
      shell: "{{ virtualenv_path }}/bin/pip install --upgrade pip"
      tags: ['setup']
    - name: Install pip requirements
      shell: "{{ virtualenv_path }}/bin/pip install -r {{ project_path }}/requirements.txt"
      tags: ['setup', 'deploy', 'quick_deploy']
    - name: Collect static files
      shell: "{{ virtualenv_path }}/bin/python {{ project_path }}/manage.py collectstatic --noinput"
      tags: ['setup', 'deploy', 'quick_deploy']
    - name: Run Migrations
      shell: "{{ virtualenv_path }}/bin/python {{ project_path }}/manage.py migrate"
      tags: ['setup', 'deploy', 'quick_deploy']
    - name: Restart Gunicorn
      become: true
      systemd:
        name: gunicorn
        state: restarted
      tags: ['deploy', 'quick_deploy']
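After the first full setup, deploying a new version of the application only requires the tasks tagged `quick_deploy`, which makes subsequent deployments much faster:
$ ansible-playbook -i inventory playbook_setup_django.yml --tags="quick_deploy"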
playbook_setup_webservers.yml
This playbook is used to set up all configurations to run Nginx and Gunicorn servers on an EC2 instance.
---
- name: Setup Webservers
  hosts: [webservers]
  become: true
  vars:
    project_name: django_boilerplate
    local_project_path: "{{ playbook_dir }}/../../../{{ project_name }}"
  tasks:
    # Configure Nginx
    - name: Update and upgrade Apt
      apt:
        update_cache: yes
        upgrade: yes
      tags: ['setup', 'update_packages']
    - name: Install APT Packages
      apt: name={{ item }} update_cache=yes state=latest
      loop: [
        'nginx',
        'supervisor',
        'certbot',
      ]
      tags: ['setup', 'update_packages']
    - name: Delete default nginx site
      file:
        path: /etc/nginx/sites-available/default
        state: absent
      tags: ['setup']
    - name: Delete default nginx site symlink
      file:
        path: /etc/nginx/sites-enabled/default
        state: absent
      tags: ['setup']
    - name: Copy default Nginx files config
      synchronize:
        src: ../config_files/nginx
        dest: /etc/
        recursive: true
        perms: true
      tags: ['setup', 'update']
    - name: Check SSL Certificate file status
      become: true
      stat:
        path: "/etc/letsencrypt/live/djangoboilerplate.org/fullchain.pem"
      register: ssl_certificate_file_status
      tags: ['setup']
    - name: Comment directives related to SSL
      command: 'sed -i -r "s/(listen .*443)/\1; #/g; s/(ssl_(certificate|certificate_key|trusted_certificate) )/#;#\1/g; s/(server \{)/\1\n ssl off;/g" /etc/nginx/sites-available/djangoboilerplate.org.conf'
      when: not ssl_certificate_file_status.stat.exists
      tags: ['setup']
    - name: Restart nginx
      ansible.builtin.service:
        name: nginx
        state: restarted
      when: not ssl_certificate_file_status.stat.exists
      tags: ['setup']
    - name: Create letsencrypt directory
      file:
        path: /var/www/_letsencrypt
        state: directory
      when: not ssl_certificate_file_status.stat.exists
      tags: ['setup']
    - name: Run certbot to obtain certificates
      command: certbot certonly --webroot -d djangoboilerplate.org --email info@djangoboilerplate.org -w /var/www/_letsencrypt -n --agree-tos --force-renewal
      when: not ssl_certificate_file_status.stat.exists
      tags: ['setup']
    - name: Uncomment directives related to SSL
      command: 'sed -i -r -z "s/#?; ?#//g; s/(server \{)\n ssl off;/\1/g" /etc/nginx/sites-available/djangoboilerplate.org.conf'
      when: not ssl_certificate_file_status.stat.exists
      tags: ['setup']
    - name: Check Nginx dhparam.pem file status
      become: true
      stat:
        path: "/etc/nginx/dhparam.pem"
      register: nginx_dhparam_file_status
      tags: ['setup']
    - name: Generate openssl dhparam for Nginx
      command: 'openssl dhparam -out /etc/nginx/dhparam.pem 2048'
      when: not nginx_dhparam_file_status.stat.exists
      tags: ['setup']
    # Configure Gunicorn
    - name: Copy gunicorn socket
      copy:
        src: "{{ local_project_path }}/deployment/config_files/gunicorn.socket"
        dest: "/etc/systemd/system/gunicorn.socket"
      tags: ['setup', 'update']
    - name: Copy gunicorn service
      copy:
        src: "{{ local_project_path }}/deployment/config_files/gunicorn.service"
        dest: "/etc/systemd/system/gunicorn.service"
      tags: ['setup', 'update']
    # Restart Services
    - name: Reload systemd configuration
      systemd:
        daemon_reload: yes
      tags: ['setup', 'update']
    - name: Restart nginx
      systemd:
        name: nginx
        state: restarted
      tags: ['setup', 'update']
    - name: Restart Gunicorn
      systemd:
        name: gunicorn
        state: restarted
      tags: ['setup', 'update']
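After this playbook runs, a quick sanity check is to SSH into the instance and confirm that Nginx and Gunicorn are healthy (the service names and domain are the ones configured above):
$ ssh -i ~/.ssh/django_boilerplate_private_ssh_key.pem ubuntu@djangoboilerplate.org
$ sudo nginx -t                     # validate the Nginx configuration
$ sudo systemctl status gunicorn    # check that the Gunicorn service is running
$ curl -I https://djangoboilerplate.org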
Running everything
Terraform
With the environment variables from the previous sections exported in your shell, create the infrastructure:
$ cd terraform
$ terraform init
$ terraform apply
Ansible
$ cd ansible
$ ansible-playbook -i inventory playbook_setup_ubuntu.yml playbook_setup_django.yml playbook_setup_webservers.yml --tags="setup"
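Finally, when you no longer need the environment, everything Terraform created can be removed in one step from the terraform/ directory:
$ terraform destroy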
Conclusion
Infrastructure as code (IaC) may appear scary initially. Terraform requires a lot of initial configuration, and we have to define all the network infrastructure ourselves. Also, some of the services/resources are not easy to understand at first. But with time and practice, we can build the mental map needed to create all the infrastructure required to deploy Django applications on AWS.