Commit 9e01fcdf by Jason Bau

Merge branch 'master' into jbau/lms-preview

parents 793a6372 905bffd0
# Configuration Management
## Introduction
**This project is currently in alpha**
The goal of the edx/configuration project is to provide a simple, but
flexible, way for anyone to stand up an instance of the edX platform
that is fully configured and ready-to-go.
Building the platform takes place in two phases:
* Infrastructure provisioning
* Service configuration
As much as possible, we have tried to keep a clean distinction between
provisioning and configuration, so you are not obliged to use our tools
for both; you are free to use one without the other. The provisioning
phase stands up the required resources and tags them with role
identifiers so that the configuration tool can come in and complete
the job.
The reference platform is provisioned using an Amazon
[CloudFormation](http://aws.amazon.com/cloudformation/) template.
When the stack has been fully created you will have a new AWS Virtual
Private Cloud with hosts for the core edX services. This template
will build quite a number of AWS resources that cost money, so please
consider this before you start.
The configuration phase is managed by [Ansible](http://ansible.cc/).
We have provided a number of playbooks that will configure each of
the edX services.
This project is a rewrite of the current edX provisioning and
configuration tools. We will be migrating features to this project
over time, so expect frequent changes.
## AWS
### Building the stack
The first step is to provision the CloudFormation stack. There are
several options for doing this.
* The [AWS console](https://console.aws.amazon.com/cloudformation/home)
* The AWS [CloudFormation CLI](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-installing-cli.html)
* Via Ansible
If you don't have experience with CloudFormation, the web console is a
good place to start: it uses a form wizard to gather configuration
parameters, gives you continuous feedback while the stack is building,
and shows useful error messages when problems occur.
Details on how to build the stack using Ansible are available below.
### Connecting to Hosts in the Stack
Because the reference architecture makes use of an Amazon VPC, you will not be able
to address the hosts in the private subnets directly. However, you can easily set
up a transparent "jumpbox" so that connections to all hosts in your VPC are
tunneled.
Add something like the following to your `~/.ssh/config` file.
```
Host *.us-west-1.compute.internal
    ProxyCommand ssh -W %h:%p vpc-00000000-jumpbox
    IdentityFile /path/to/aws/key.pem
    ForwardAgent yes
    User ubuntu

Host vpc-00000000-jumpbox
    HostName 54.236.224.226
    IdentityFile /path/to/aws/key.pem
    ForwardAgent yes
    User ubuntu
```
This assumes that you only have one VPC in the `us-west-1` region
that you're trying to SSH into. Internal DNS names aren't qualified
any further than that, so to support multiple VPCs you'd have to get
creative with subnets, for example ip-10-1 and ip-10-2...
Test this by typing `ssh ip-10-0-10-1.us-west-1.compute.internal`
(using a hostname that exists in your environment, of course). If things
are configured correctly you will SSH to 10.0.10.1, jumping
transparently via your bastion host.
Getting this working is important because we'll be using Ansible
with the SSH transport, and it will rely on this configuration
being in place in order to configure your servers.
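As a quick sanity check before running any playbooks, you can confirm that Ansible can reach a private host through the jumpbox (illustrative only; the hostname below is a placeholder, substitute one from your VPC):

```
# Put one internal hostname into a throwaway inventory file, then run
# Ansible's ping module over SSH through the jumpbox.
echo "ip-10-0-10-1.us-west-1.compute.internal" > /tmp/vpc-test-inventory
ansible all -i /tmp/vpc-test-inventory -u ubuntu -m ping
```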
### Tagging
Tagging is the bridge between the provisioning and configuration
phases. The servers provisioned in your VPC will be stock Ubuntu
12.04 LTS servers. The only difference between them will be the tags
that CloudFormation has applied to them. These tags will be used by
Ansible to map playbooks to the correct servers. The application of
the appropriate playbook will turn each stock host into an
appropriately configured service.

The *Group* tag is where the magic happens. Every AWS EC2 instance
will have a *Group* tag that corresponds to a group of machines that
need to be deployed to and targeted as a group of servers.
**Example:**

* `Group`: `edxapp_stage`
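With the `ec2.py` dynamic inventory, that tag surfaces as an inventory group, so a whole group of servers can be targeted at once. A sketch (the exact group name depends on how `ec2.py`/`ec2.ini` is configured; many setups expose it as `tag_Group_edxapp_stage`):

```
# List the hosts Ansible would target for the edxapp_stage group
# (the group name here is an assumption based on common ec2.py tag naming).
ansible tag_Group_edxapp_stage -i ec2.py -u ubuntu --list-hosts
```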
...@@ -31,9 +129,9 @@ version instead of the official v1.1 release._
specific variables.
* __Groups__ - A Group name is an identifier that corresponds to a group of
  roles plus an identifier for the environment. Example: *edxapp_stage*,
  *edxapp_prod*, *xserver_stage*, etc. For the purpose of targeting servers
  for deployment, groups are created automatically by the `ec2.py` inventory
  script, since these group names will map to the _Group_ AWS tag.
* __Roles__ - A role will map to a single function/service that runs on
  a server.
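For orientation, a role is just a directory with a conventional layout; the roles in this repository follow the usual Ansible pattern (shown schematically below, details vary per role):

```
roles/<role_name>/
    tasks/main.yml      # the tasks that configure the service
    handlers/main.yml   # e.g. service restarts triggered by `notify`
    templates/          # Jinja2 templates (nginx sites, rsyslog config, ...)
    files/              # static files copied to the host
    vars/main.yml       # role-specific variables
```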
...@@ -44,9 +142,9 @@ version instead of the official v1.1 release._
As a general policy we want to protect the following data:

* Usernames
* Public keys (keys are OK to be public, but can be used to figure out usernames)
* Hostnames
* Passwords, API keys
The following yml files and examples serve as templates that should be overridden with your own
environment specific configuration:
...@@ -54,7 +152,7 @@ environment specific configuration:
* vars in `secure_example/vars`
* files in `secure_example/files`

Directory structure for the secure repository:
...@@ -166,6 +264,8 @@ playbooks
#### Provision the stack
**This assumes that you have working SSH access as described above.**
```
cd playbooks
ansible-playbook -vvv cloudformation.yml -i inventory.ini -e 'region=<aws_region> key=<key_name> name=<stack_name> group=<group_name>'
```
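Once the playbook has been submitted you can watch the stack come up in the AWS console, or poll it from the command line. A sketch, assuming the unified `aws` CLI is installed and configured (it is not part of this repository):

```
# Poll CloudFormation until the stack reports CREATE_COMPLETE
aws cloudformation describe-stacks --stack-name <stack_name> --region <aws_region>
```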
...@@ -211,36 +311,6 @@ If that works fine, then you can add an export of PYTHONPATH to
* Creates base directories
* Creates the lms json configuration files
Assuming that the edxapp_stage.yml playbook targets hosts in your VPC
for which there are entries in your `.ssh/config`, do the
...
---
app_base_dir: /opt/wwc
log_base_dir: /mnt/logs
venv_dir: /opt/edx
# these paths are relative to the playbook dir
...@@ -7,4 +8,4 @@ venv_dir: /opt/edx
secure_dir: 'secure_example'
# this indicates the path to site-specific (with precedence)
# things like nginx template files
local_dir: '../../ansible_local'
\ No newline at end of file
---
# this path is relative to the playbook dir
secure_dir: '../../configuration-secure/ansible'
---
edxapp_prod: true
secure_dir: '../../configuration-secure/ansible'
\ No newline at end of file
...@@ -19,5 +19,15 @@
- include: ../../nginx/tasks/nginx_site.yml state=link site_name=cms-backend
- name: Create CMS log target directory
  file: path={{log_base_dir}}/cms state=directory owner=syslog group=adm mode=2770
  tags:
    - cms
    - cms-env
    - logging
# If we set up CMS, we have to set up edx logging
- include: ../../common/tasks/edx_logging_base.yml
# Creates CMS upstart file
- include: ../../gunicorn/tasks/upstart.yml service_variant=cms
...@@ -26,7 +26,7 @@ cms_env_config:
'KEY_FUNCTION': 'util.memcache.safe_key'
'KEY_PREFIX': 'cms.edx.org'
'LOCATION': [ "deploycache-large.foo-bar.amazonaws.com:11211" ]
'LOG_DIR': '{{log_base_dir}}/edx'
'LOGGING_ENV': 'cms-dev'
'SITE_NAME': 'studio.cms-dev.m.edx.org'
'SYSLOG_SERVER': 'syslog.a.m.i4x.org'
...
#!/bin/bash
function usage() {
    echo "update.sh [cms|lms|all|none]"
    echo "  option is what services to collectstatic and restart (default=all)"
}
if [ $# -gt 1 ]; then
    usage
    exit 1
fi

if [ $# == 0 ]; then
    do_CMS=1
    do_LMS=1
else
    case $1 in
        cms)
            do_CMS=1
            ;;
        lms)
            do_LMS=1
            ;;
        both|all)
            do_CMS=1
            do_LMS=1
            ;;
        none)
            ;;
        *)
            usage
            exit 1
            ;;
    esac
fi
# Print the command being run, then execute it
function run() {
    echo
    echo "======== $@"
    "$@"
}
source /etc/profile
source /opt/edx/bin/activate
export PATH=$PATH:/opt/www/.gem/bin
cd /opt/wwc/mitx
BRANCH="origin/feature/edx-west/stanford-theme"
export GIT_SSH="/tmp/git_ssh.sh"
run git fetch origin -p
run git checkout $BRANCH
if [[ $do_CMS ]]; then
    export SERVICE_VARIANT=cms
    run rake cms:gather_assets:aws
    run sudo restart cms
fi

if [[ $do_LMS ]]; then
    export SERVICE_VARIANT=lms
    run rake lms:gather_assets:aws
    run sudo restart lms
fi
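The common role installs this script on each server (see the `software_update.yml` task below), so a manual update of a single service looks like this (illustrative):

```
# Rebuild static assets and restart only the LMS on this host
/usr/local/bin/edx-update.sh lms
```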
---
- name: restart rsyslogd
  service: name=rsyslog state=restarted
  sudo: True
---
- name: Install rsyslog configuration for edX
  template: dest=/etc/rsyslog.d/99-edx.conf src=edx_rsyslog.j2 owner=root group=root mode=644
  notify: restart rsyslogd
  tags:
    - logging

- name: Install logrotate configuration for edX
  template: dest=/etc/logrotate.d/edx-services src=edx_logrotate.j2 owner=root group=root mode=644
  tags:
    - logging

- name: Touch tracking file into existence
  command: touch -a {{log_base_dir}}/tracking.log creates={{log_base_dir}}/tracking.log
  tags:
    - logging

- name: Set permissions on tracking file
  file: path={{log_base_dir}}/tracking.log owner=syslog group=adm mode=640
  tags:
    - logging

- name: Install logrotate configuration for tracking file
  template: dest=/etc/logrotate.d/tracking.log src=edx_logrotate_tracking_log.j2 owner=root group=root mode=644
  tags:
    - logging
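Because every task above is tagged `logging`, the logging setup can be re-applied in isolation. A sketch, assuming the playbook and inventory you already use for the target hosts (e.g. `edxapp_stage.yml` with the `ec2.py` inventory):

```
# Re-run only the logging-related tasks
ansible-playbook edxapp_stage.yml -i ./ec2.py --tags logging
```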
...@@ -16,38 +16,52 @@
  tags:
    - pre_install

- name: Install role-independent useful system packages
  # do this before log dir setup; rsyslog package guarantees syslog user present
  apt: pkg={{item}} install_recommends=yes state=present update_cache=yes
  with_items:
    - ack-grep
    - lynx-cur
    - logrotate
    - mosh
    - rsyslog
    - screen
    - tree
  tags:
    - pre_install

- name: Create log directory
  file: path=$log_base_dir state=directory mode=2770 group=adm owner=syslog
  tags:
    - pre_install

- name: Create alias from app_base_dir to the log_base_dir
  file: state=link src=$log_base_dir path=$app_base_dir/log
  tags:
    - pre_install
    - logging

- name: Create convenience link from log_base_dir to system logs
  file: state=link src=/var/log path=$log_base_dir/system
  tags:
    - pre_install
    - logging

- name: Touch edx log file into place
  # This is done for the benefit of the rake commands, which expect it
  command: touch -a {{log_base_dir}}/edx.log creates={{log_base_dir}}/edx.log
  tags:
    - pre_install
    - logging

- name: Set permissions on edx log file
  # This is done for the benefit of the rake commands, which expect it
  file: path={{log_base_dir}}/edx.log owner=syslog group=adm mode=640
  tags:
    - pre_install
    - logging

- include: create_venv.yml
- include: edx_logging_base.yml
- include: software_update.yml
---
- name: edx-update.sh, manual lms/cms update script
  copy: src=roles/common/files/edx-update.sh dest=/usr/local/bin/edx-update.sh owner=ubuntu group=adm mode=0775
  tags:
    - update
{{log_base_dir}}/*/edx.log {
create
compress
copytruncate
delaycompress
dateext
missingok
notifempty
daily
rotate 90
size 1M
}
{{log_base_dir}}/tracking.log {
create
compress
delaycompress
dateext
missingok
notifempty
daily
rotate 365000
size 1M
}
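To check a rendered logrotate rule without waiting for the daily cron run, it can be dry-run by hand (illustrative; not part of the playbooks):

```
# Show what logrotate would do for the edx services rule, without rotating
sudo logrotate -d /etc/logrotate.d/edx-services
```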
# custom edx syslog configuration
# Put in place and templatized by ansible
#
# Cliffs notes version: ansible uses local0 and local1, so they have to be
# plumbed through appropriately.

#############
# Change some global configuration
#############
# don't escape newlines
$EscapeControlCharactersOnReceive off
$SystemLogRateLimitInterval 0
$RepeatedMsgReduction off
$MaxMessageSize 32768

#############
# Override default auth config so we can ignore local0 and local1 also
#############
auth,authpriv.* /var/log/auth.log
*.*;auth,authpriv.none,local0.none,local1.none -/var/log/syslog

# According to the docs for rsyslog, "syslogtag" is the "TAG" from
# the message which in the case of tracking logs is interpreted to
# be everything before the first whitespace character.
# This is why we include "syslogtag."
# Maybe one day this will be answered:
# - http://stackoverflow.com/questions/10449447/how-to-avoid-syslogtag-from-rsyslog-template
$template tracking,"%syslogtag%%msg%\n"

# looks for [service_variant=<name>] in the beginning of the log message,
# if it exists the log will go into {{log_base_dir}}/<name>/edx.log, otherwise
# it will go into {{log_base_dir}}/edx.log
$template DynaFile,"{{log_base_dir}}/%syslogtag:R,ERE,1,BLANK:\[service_variant=([a-zA-Z_-]*)\].*--end%/edx.log"

local0.* -?DynaFile
local1.* {{log_base_dir}}/tracking.log;tracking
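For illustration (not part of the role): with this template installed, a message sent to `local0` whose tag carries a service variant should be routed into the per-service log. The tag format and the `/mnt/logs` path below are assumptions based on the template above and the default `log_base_dir`:

```
# Emit a test line tagged with a service_variant, then see where rsyslog put it
logger -p local0.info -t "[service_variant=lms]" "logging smoke test"
sudo tail -n 1 /mnt/logs/lms/edx.log
```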
...@@ -25,6 +25,16 @@
    - lms
    - lms-env

- name: Create lms log target directory
  file: path={{log_base_dir}}/lms state=directory owner=syslog group=adm mode=2770
  tags:
    - lms
    - lms-env
    - logging

# If we set up LMS, we have to set up edx logging
- include: ../../common/tasks/edx_logging_base.yml
# Install ssh keys for ubuntu account to be able to check out from mitx
# Temporary behavior, not needed after June 1. Perhaps still useful as a recipe.
# {{ secure_dir }} is relative to the top-level playbooks dir so there is some
...
...@@ -24,27 +24,34 @@ lms_env_config:
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache'
'KEY_FUNCTION': 'util.memcache.safe_key'
'CERT_QUEUE': 'certificates'
# 'COURSE_LISTINGS':
#   'default': ['MITx/6.002x/2012_Fall']
#   'stage-berkeley': [ 'BerkeleyX/CS169/fa12']
#   'stage-harvard': [ 'HarvardX/CS50/2012H']
#   'stage-mit': [ 'MITx/3.091/MIT_2012_Fall']
#   'stage-num': [ 'MITx/6.002x-NUM/2012_Fall_NUM']
#   'stage-sjsu': [ 'MITx/6.002x-EE98/2012_Fall_SJSU']
'LOCAL_LOGLEVEL': 'INFO'
# 'META_UNIVERSITIES':
#   'UTx': [ 'UTAustinX']
'MITX_FEATURES':
  'AUTH_USE_OPENID_PROVIDER': true
  'CERTIFICATES_ENABLED': true
  'ENABLE_DISCUSSION_SERVICE': true
  'ENABLE_INSTRUCTOR_ANALYTICS': true
  'ENABLE_PEARSON_HACK_TEST': false
  'SUBDOMAIN_BRANDING': false
  'SUBDOMAIN_COURSE_LISTINGS': false
# 'SUBDOMAIN_BRANDING':
#   'stage-berkeley': 'BerkeleyX'
#   'stage-harvard': 'HarvardX'
#   'stage-mit': 'MITx'
#   'stage-num': 'MITx'
#   'stage-sjsu': 'MITx'
# 'VIRTUAL_UNIVERSITIES': []
'WIKI_ENABLED': true

lms_source_repo: git@github.com:edx/edx-platform.git
lms_debian_pkgs:
  - apparmor-utils
  - aspell
...
...@@ -6,26 +6,50 @@
  notify: restart nginx
  tags:
    - nginx

# Standard configuration that is common across all roles
# Default values for these variables are set in group_vars/all
# Note: remove spaces in {{..}}, otherwise you will get a template parsing error.
- include: nginx_site.yml state={{nginx_cfg.sites_enabled.edx_release}} site_name=edx-release
- include: nginx_site.yml state={{nginx_cfg.sites_enabled.basic_auth}} site_name=basic-auth

- name: Write out default htpasswd file
  copy: content={{ nginx_cfg.htpasswd }} dest=/etc/nginx/nginx.htpasswd
  tags:
    - nginx

- name: Create nginx log file location (just in case)
  file: path={{log_base_dir}}/nginx state=directory owner=syslog group=adm mode=2770
  tags:
    - nginx
    - logging

# removing default link
- name: Removing default nginx config and restart (enabled)
  file: path=/etc/nginx/sites-enabled/default state=absent
  notify: restart nginx
  tags:
    - nginx

- name: Ensuring that nginx is running
  service: name=nginx state=started
  tags:
    - nginx

# Note that nginx logs to /var/log until it reads its configuration, so /etc/logrotate.d/nginx is still good
- name: Set up nginx access log rotation
  template: dest=/etc/logrotate.d/nginx-access src=edx_logrotate_nginx_access.j2 owner=root group=root mode=644
  tags:
    - logging

- name: Set up nginx error log rotation
  template: dest=/etc/logrotate.d/nginx-error src=edx_logrotate_nginx_error.j2 owner=root group=root mode=644
  tags:
    - logging

- name: Removing default nginx config (available)
  file: path=/etc/nginx/sites-available/default state=absent
  tags:
    - nginx
...@@ -5,13 +5,13 @@ server {
server_name trace-cms.*
            studio.lms-dev.m.edx.org;
access_log {{log_base_dir}}/nginx/access.log;
error_log {{log_base_dir}}/nginx/error.log error;
#
# Send error response when request host isn't under our control
# We will no longer respond to proxy attempts like this with
# anything.
# curl -i -A '' -x http://www.edx.org:80 --proxy-negotiate -U u:p -u u:p http://chat.sdtz.com
#
set $reject 'no';
...
# Put in place by ansible
{{log_base_dir}}/nginx/access.log {
create
compress
delaycompress
dateext
missingok
notifempty
daily
rotate 90
size 1M
}
# Put in place by ansible
{{log_base_dir}}/nginx/error.log {
create
compress
delaycompress
dateext
missingok
notifempty
daily
rotate 90
size 1M
}
...@@ -3,7 +3,9 @@ server {
listen 80;
server_name *.edx.org;

access_log {{log_base_dir}}/nginx/access.log;
error_log {{log_base_dir}}/nginx/error.log error;

#
# Send error response when request host isn't under our control
...