By DevOps on Sat 04 August 2018
I have been looking into the best way to deploy Elixir in the cloud. As part of that, I have been building various AMIs with only the minimum needed to run an Elixir app.
Nerves is a framework for building embedded systems in Elixir. Instead of running a general purpose operating system, it does as much as possible in Elixir. It boots to an init process which starts an Erlang VM, which is then responsible for starting the system.
The traditional Linux boot process has a lot of legacy cruft: shell scripts calling C programs which call kernel APIs to configure the network. With Nerves, we do this in Elixir, using helper programs where necessary.
When Elixir is in charge, we need some way to combine system-level code and application code on the same VM, and to handle system updates. The Shoehorn library takes care of that.
This post shows how you can run a Nerves application on EC2.
I created a Nerves "system" for EC2, nerves_system_ec2. It is based on nerves_system_x86_64, adding the drivers needed for EC2 to the kernel and configuring the boot process for the EC2 environment.
I created nerves_init_ec2 to bring up the system, similar to nerves_init_gadget. AWS provides EC2 instance metadata to the running system, accessed via HTTP calls to a special link-local IP address. nerves_init_ec2 uses this information at runtime to configure the instance. The most important part is configuring the ssh console to use the SSH key pair, so you can access the system remotely.
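A quick sketch of how that metadata lookup works (the get_metadata helper is made up for illustration, but the address and paths are the standard instance metadata service):

```shell
# Minimal sketch of querying EC2 instance metadata (IMDSv1).
# 169.254.169.254 is the standard link-local metadata address;
# METADATA_URL is overridable so the helper can be tried outside EC2.
METADATA_URL="${METADATA_URL:-http://169.254.169.254/latest/meta-data}"

# Fetch one metadata key, e.g. "public-ipv4" or "public-keys/0/openssh-key"
get_metadata() {
  curl -sf "$METADATA_URL/$1"
}
```

On a running instance, `get_metadata public-keys/0/openssh-key` returns the public half of the key pair named at launch, which is what the ssh console needs.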
Following are instructions for how to get a Nerves app running on EC2. I created a simple Nerves app which does all this, hello_nerves_ec2.
I am building the system via an instance in EC2. The build server runs Ubuntu 18.04 on a t2.xlarge instance to have more resources for building the custom nerves system. The build generates a lot of files, so I added a 100GB gp2 EBS volume mounted under my home directory.
To deploy, I write the Nerves firmware image to the disk, then take a snapshot of the volume, turn it into an AMI, and launch an instance from it. I attach a 1GB EBS volume for the Nerves system under /dev/xvdn.
sudo apt install build-essential automake autoconf git squashfs-tools ssh-askpass
sudo apt install libssl-dev libncurses5-dev bc m4 unzip cmake python xsltproc
sudo apt install libmnl-dev
wget https://github.com/fhunleth/fwup/releases/download/v1.2.3/fwup_1.2.3_amd64.deb
sudo dpkg -i fwup_1.2.3_amd64.deb
git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.5.1
echo -e '\n. $HOME/.asdf/asdf.sh' >> ~/.bashrc
echo -e '\n. $HOME/.asdf/completions/asdf.bash' >> ~/.bashrc
asdf plugin-add erlang
asdf plugin-add elixir
asdf install erlang 21.0
asdf install elixir 1.6.6
asdf global erlang 21.0
asdf global elixir 1.6.6
mix local.hex
mix local.rebar
mix archive.install hex nerves_bootstrap
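As an alternative to the asdf global commands above, asdf also reads a .tool-versions file in the project directory, so the versions can be pinned per project:

```
erlang 21.0
elixir 1.6.6
```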
Check out nerves_system_ec2:
git clone https://github.com/cogini/nerves_system_ec2
mix nerves.new hello_nerves_ec2
cd hello_nerves_ec2
In mix.exs, reference the new Nerves system:
defp system("ec2"), do: [{:nerves_system_ec2, path: "../nerves_system_ec2", runtime: false, nerves: [compile: true]}]
Add nerves_init_ec2 to the deps in mix.exs:
defp deps(target) do
[
{:nerves_runtime, "~> 0.4"},
{:nerves_init_ec2, github: "cogini/nerves_init_ec2"},
] ++ system(target)
end
In config/config.exs, add nerves_init_ec2 to the list of applications loaded by Shoehorn:
config :shoehorn,
init: [:nerves_runtime, :nerves_init_ec2],
app: Mix.Project.config()[:app]
Configure nerves_init_ec2 if you like. The defaults bring up a system with an IEx console accessible via ssh on port 22.
export MIX_TARGET=ec2
mix deps.get
mix firmware
Burn the firmware to the EBS volume attached to the build server. Be careful about the device name: I have messed up my system by overwriting my build server disks with firmware files more times than I would like to admit...
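Before burning, it is worth double-checking which device is which; lsblk lists the attached block devices:

```shell
# List attached block devices with sizes and mount points, to confirm
# the target before overwriting anything. The 1GB Nerves target volume
# should show up as xvdn with no mount point.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```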
mix firmware.burn -d /dev/xvdn
nerves_runtime initializes the application data partition on first boot, but we may only get one boot out of an AMI. Create the filesystem in the build environment instead:
sudo mkfs.ext4 /dev/xvdn4
At this point, the new Nerves system is all set up on the EBS volume. Now we need to launch an instance from it. We do that using the AWS API, so it can run from anywhere. I normally run it from my dev machine, but you can do it from the build server as well.
In order to talk to the API, we need permissions. When you create an AWS account, you get a "root" account with full permissions, but you should not use it for everyday operations. You should create an admin user for yourself, and a role for your app to run under which gives it access to specific resources.
Go to IAM in the AWS console.
Create a group called Admins and attach the policy AdministratorAccess, giving members full access.
Create a user for yourself, e.g. cogini-jake. Under "Access type," check "Programmatic access" and "AWS Management Console access," and set your login password. Click "Next: Permissions," then "Add user to group," selecting the Admins group.
Record the "Access key id" and "Secret access key" now; this is your only chance.
On your local dev machine, set up an AWS profile in ~/.aws/credentials with the keys:
[nerves-dev]
aws_access_key_id = XXX
aws_secret_access_key = YYY
Most AWS client tools will automatically look up the access keys using the profile, so you can control keys on a per-project basis by setting the profile in the environment.
export AWS_PROFILE=nerves-dev
Install the AWS Command Line Interface:
pip install awscli
Create an ssh key pair by running create-key-pair.sh:
bin/create-key-pair.sh nerves
Copy the output to ~/.ssh/nerves.pem and chmod 0600 ~/.ssh/nerves.pem.
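The repo script is the source of truth; as a rough sketch of what create-key-pair.sh does (an assumption on my part, though aws ec2 create-key-pair is the real API call):

```shell
#!/bin/sh
# Sketch of a key pair creation script (an assumption; the real
# bin/create-key-pair.sh may differ). Prints the private key to stdout.
set -e

create_key_pair() {
  # --query KeyMaterial extracts just the private key from the JSON response
  aws ec2 create-key-pair --key-name "$1" --query 'KeyMaterial' --output text
}

# Usage: create-key-pair.sh nerves > ~/.ssh/nerves.pem
if [ -n "$1" ]; then
  create_key_pair "$1"
fi
```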
Create an AWS security group (like a firewall) which allows access to the ports on the instance from the Internet. create-security-group.sh opens port 22 for the IEx console and port 80 for HTTP.
bin/create-security-group.sh nerves
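Again as a sketch (an assumption; the real bin/create-security-group.sh may differ), creating the group and opening the two ports comes down to:

```shell
#!/bin/sh
# Sketch of security group creation (an assumption; the real script may
# differ). Opens port 22 (ssh console) and 80 (HTTP) to the Internet.
set -e

create_security_group() {
  group="$1"
  aws ec2 create-security-group --group-name "$group" \
    --description "Nerves instance access"
  for port in 22 80; do
    aws ec2 authorize-security-group-ingress --group-name "$group" \
      --protocol tcp --port "$port" --cidr 0.0.0.0/0
  done
}

if [ -n "$1" ]; then
  create_security_group "$1"
fi
```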
launch-instance-from-volume.sh takes a snapshot of the volume, builds an AMI, then launches an EC2 instance with it.
Edit the script to match your details:
# Name of security group
SECURITY_GROUP=nerves
# Name of instance to create
NAME=hello_nerves_ec2
KEYPAIR=nerves
# Tag instance with owner so admins can clean up stray instances
TAG_OWNER=jake
Run the script, specifying your volume:
bin/launch-instance-from-volume.sh vol-abc123
The script will print the IP of the new instance, or you can get it from the AWS console.
ssh -i ~/.ssh/nerves.pem 123.45.67.89
To exit the SSH session, type ~.
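For reference, the snapshot → AMI → launch flow that launch-instance-from-volume.sh performs can be sketched with the AWS CLI like this (a simplified assumption; the instance type and names here are examples, not necessarily what the script uses):

```shell
#!/bin/sh
# Sketch of the snapshot -> AMI -> instance flow (an assumption; the real
# bin/launch-instance-from-volume.sh takes more options). Pass the volume id.
set -e

launch_from_volume() {
  volume_id="$1"

  # Snapshot the volume that holds the Nerves firmware
  snapshot_id=$(aws ec2 create-snapshot --volume-id "$volume_id" \
    --query 'SnapshotId' --output text)
  aws ec2 wait snapshot-completed --snapshot-ids "$snapshot_id"

  # Register an AMI whose root device is the snapshot
  ami_id=$(aws ec2 register-image --name hello_nerves_ec2 \
    --architecture x86_64 --virtualization-type hvm \
    --root-device-name /dev/xvda \
    --block-device-mappings "DeviceName=/dev/xvda,Ebs={SnapshotId=$snapshot_id}" \
    --query 'ImageId' --output text)

  # Launch an instance from the AMI
  aws ec2 run-instances --image-id "$ami_id" --instance-type t2.nano \
    --key-name nerves --security-groups nerves
}

if [ -n "$1" ]; then
  launch_from_volume "$1"
fi
```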
You can view the console output in the AWS Management Console EC2 Dashboard with "Actions | Instance Settings | Get System Log". The graphical instance screenshot is available immediately, but the text log takes a few minutes to appear.
Following is the process I used to create nerves_system_ec2. I basically followed the Nerves documentation for customizing the system.
git clone https://github.com/nerves-project/nerves_system_x86_64 nerves_system_ec2
cd nerves_system_ec2/
git remote rename origin upstream
git remote add origin git@github.com:cogini/nerves_system_ec2.git
git push origin master
mix nerves.system.shell      # enter the Nerves/Buildroot build shell
make menuconfig              # configure Buildroot packages
make savedefconfig           # save changes to nerves_defconfig
make linux-menuconfig        # configure the Linux kernel
make linux-update-defconfig  # save kernel changes to the defconfig
I used the same kernel config as for my minimal EC2 system built with Buildroot.
The kernel options are the same as nerves_system_x86_64, with a few additions.
Since we can't manually respond to a kernel panic, we just reboot:
set cloud_opts=panic=1 boot.panic_on_fail
Configure hardware options:
# https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nvme-ebs-volumes.html#timeout-nvme-ebs-volumes
set hardware_opts=nvme.io_timeout=4294967295
Set up a serial console, allowing output to be captured as text with "Actions | Instance Settings | Get System Log" or the AWS CLI command aws ec2 get-console-output.
set console_opts=console=tty1 console=ttyS0
This is the resulting kernel command line:
linux (hd0,msdos2)/boot/bzImage root=PARTUUID=04030201-02 rootwait $console_opts $cloud_opts $hardware_opts
Use the serial console:
-c ttyS0
-s "/usr/bin/nbtty"
Reduce the size of the application data filesystem to match the 1GB volume; fwup sizes partitions in 512-byte blocks, so 1013248 blocks is roughly 495 MB, leaving room for the other partitions. In the cloud we should not be storing data on the instance anyway; everything should go in S3 or a database.
define(APP_PART_COUNT, 1013248)