
Lab 1: Initial System Setup

Overview

In this lab, we will be setting up some tools that you will be using throughout this class. To complete this lab, you should do all of the steps below as instructed. If you are unable to complete all the steps today, you must complete them outside of the lab time.

To receive credit for this lab, you MUST show your work to the TA during the lab and push it to GitHub before the deadline. Please note that submissions are due right before your respective lab session in the following week. For example, if your lab is this Friday at 10 AM, the submission deadline will be next Friday at 10 AM. There is a "NO LATE SUBMISSIONS" policy for labs.

Learning Objectives

LO1. Get comfortable with installing software on your computer.
LO2. Set up Git and GitHub.
LO3. Learn to set up Docker containers.
LO4. Understand the configuration settings in a docker-compose.yaml file.
LO5. Learn to run Docker containers.

Part A

Quiz on Canvas - Only for this week, the lab quiz will be due before the next lab. For future labs, the quizzes will be due before your lab time unless otherwise stated.

Part B

PLEASE READ CAREFULLY

All instructions in this lab must be run on your local machine. Please DO NOT use coding.csel.io or any other cloud environment.

Setup Github

tip

For students working on a Windows machine: if you do not already have WSL installed on your computer, please follow these instructions to install it (a quick one-line sketch is shown below). You can then follow the Linux-based instructions, except for the Docker installation, in your WSL terminal for all lab activities.
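If you just want the fast path, Microsoft's documented one-liner (run from an administrator PowerShell or Command Prompt window, followed by a reboot) is roughly:

wsl --install

This installs WSL with Ubuntu as the default distribution; the linked instructions cover picking a different distribution and troubleshooting.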

To work with your repository on GitHub, you need either an SSH key or a personal access token.

Connecting to GitHub with SSH

You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. Refer to this link to connect to GitHub with SSH.
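In short, the linked guide boils down to something like the following sketch. Replace the placeholder email with the one on your GitHub account, and paste the printed public key into GitHub under Settings > SSH and GPG keys.

ssh-keygen -t ed25519 -C "myUserName@gmail.com"   # generate a new key pair
eval "$(ssh-agent -s)"                            # start the ssh-agent
ssh-add ~/.ssh/id_ed25519                         # add your private key to the agent
cat ~/.ssh/id_ed25519.pub                         # copy this output into GitHub
ssh -T git@github.com                             # verify the connection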

Install Git

caution

This section is OS-dependent, so only complete one of the panels!

Run the following command to install Git

sudo apt install git

Check if Git is installed properly

git --version

Set your git name to match your real name

git config --global user.name "John Doe"
caution

Don't forget to replace "John Doe" with your name.

Verify your git name was updated

git config --global user.name

Set your git email to match your GitHub account

Note that this should be the email associated with your GitHub account. You can check by visiting your GitHub Settings.

git config --global user.email "myUserName@gmail.com"

Verify your git email was updated

git config --global user.email

Clone your GitHub repository

info
You need to accept the invite to the GitHub classroom assignment to get a repository.

GitHub Classroom Assignment
For the next two steps, make sure to edit the name of the repository after copying the command into your terminal.
git clone git@github.com:CU-CSCI3308-Fall2024/lab-1-initial-set-up-<YOUR_USER_NAME>.git

Navigate to the repository on your system.

cd lab-1-initial-set-up-<YOUR_USER_NAME>

Create a new file (to test your connection)

vi test.txt
  1. Switch to insert mode (press i) to add text
  2. Add any text you wish (This is a test!!!)
  3. Exit insert mode by pressing ESC.
  4. To save & quit, type :wq (write & quit) and hit enter

Add, commit, & push your changes to the repo

git add .
git commit -m "Commit message goes here"
git push
🎥Recording output
For this section, you are not required to take any screenshots.

Install Docker

caution

This section is OS-dependent, so only complete one of the panels!

Preface

These instructions are a fast-track install guide, and we won't go through all of the details and options you have. We also assume that you've never installed docker on your system, so we won't cover cleaning up old installations. If you'd like more information, please refer to the setup guide here.

Install dependencies and add the key.

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Check that you have the key.

sudo apt-key fingerprint 0EBFCD88

Setup the stable repository

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Install docker engine

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

Install docker compose

sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version

Optional: Post-installation steps

The Docker daemon runs as root, which means you will have to use sudo whenever you run docker commands. Alternatively, you can add your user to the docker group to ditch the use of sudo. There are pros and cons to both methods (mostly security vs. convenience), but you don't need to decide on this right now; the documentation discusses this in far greater detail here. For now, just remember to prepend all docker commands, such as docker run, with sudo.
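If you do decide to go the group route later, the commands from Docker's post-installation guide look roughly like this (you will need to log out and back in, or run newgrp docker, for the group change to take effect):

sudo groupadd docker            # create the docker group (it may already exist)
sudo usermod -aG docker $USER   # add your user to the docker group
newgrp docker                   # pick up the new group membership in the current shell
docker run --rm hello-world     # verify that docker now works without sudo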

🎥Recording output
For this section, you are not required to take any screenshots.

Introduction to Docker

Running the Hello World Container

Open the terminal and run the following command

docker run --name hello_world_container hello-world

Output:

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/get-started/

You'll likely see information about the image being pulled if it's not cached locally. This is normal and will happen the first time you run any given image. If all goes well, the above should be visible at the end of the output.

Listing containers

Run the following command in terminal to see the containers on your system

docker ps -a

Output:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7164fe3c9664 hello-world "/hello" 3 minutes ago Exited (0) 3 minutes ago hello_world_container

Note that docker ps by default only lists running containers (a container is an instance of a docker image). We used the -a flag to list all containers.

Removing stopped containers

i. Removing old containers is a vital part of using docker. One way to do this is by removing stopped containers, such as the one reported above.

docker rm hello_world_container

ii. Checking our containers again, we can see that it has indeed been removed.

docker ps -a

Output:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

Removing containers automatically on exit

We can also use the flag --rm to remove the container automatically once it finishes execution.

docker run --rm hello-world

You should see the same output as before. However, if you check for containers using docker ps -a, you'll notice it has already been removed. Wonderful!

Running an interactive container

You might've noticed that the hello-world image mentioned an "ubuntu" image. What gives, that's an operating system? Let's give it a go.

docker run -it ubuntu bash

There are some new parts to this one, so let's break it down. The flag -it tells docker you'd like to run the image interactively, and that you'd like a tty shell (it's two flags really, but you'll almost always use them together). The bash indicates that we'd like to use bash as our command.

Your command prompt should have changed a bit. Notice that the user and host have changed. We'll be using this difference in prompt to indicate when a command is being executed in docker. Note that the host name is not likely to match on your system; that is because the host here is the generated container ID.

You can exit the container as follows:

exit

Don't forget to clean up the container you just created! Remember, you can avoid this by using the flag --rm.
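In case you need a refresher on cleaning up an unnamed container, a quick sketch (the <CONTAINER_ID> placeholder is whatever ID docker ps -a reports for your ubuntu container):

docker ps -a                # find the ID of the ubuntu container you just exited
docker rm <CONTAINER_ID>    # replace <CONTAINER_ID> with the ID from the list above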

Running a detached container

As well as being able to run containers interactively, we can run them in detached mode.

docker run --name ubuntu_container -dt ubuntu bash

Output:

ace17ba3269cab8cf250c57841ab216f14963715ff646607be90af17a41a6ef

We are now detached from the container but it's still running. We can check using:

docker ps

Output:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ace17ba3269c ubuntu "bash" 2 minutes ago Up 2 minutes ubuntu_container

We'll see why detached containers are useful soon, but for now we'd like to stop this one. Since it is running, our previous methods won't work.

docker rm ubuntu_container

Output:

Error response from daemon: You cannot remove a running container ace17ba3269cab8cf250c57841ab216f14963715ff646607be90af17a41a6ef6. Stop the container before attempting removal or force remove

As docker informs us, we must first stop the container, or we can force remove. Let's force remove the container.

docker rm -f ubuntu_container

Wrapping up

Did you leave any containers behind? Make sure to double-check! You've got the basics of docker down, and feel free to come back to this section as you need. As a tip, if you forget a command or its flags, docker's built-in help is very useful.

For top level info:

docker --help

Or for command specific information:

docker COMMAND --help

Introduction to Docker Compose

Working with multiple containers

When working with multiple containers, it's often easier to use an orchestration tool, such as Docker Compose, to manage the interdependencies of our services. Throughout our labs this semester, we will be working primarily with 2 containers:

  1. PostgreSQL database, and
  2. Node.js

In this section, we'll look at running a simple app with docker compose.

If you haven't already done so, navigate to your repository on your local system. You should see a file called docker-compose.yaml.

Inspect the configuration outline

Open up the file using an editor of your choice (e.g., Vi, Vim, VS Code, or notepad).

vi docker-compose.yaml

You should see the file below. Currently, the only value present is for version. We'll be filling it out together in this section.

docker-compose.yaml
version: '3.9'

services:
  web:
    image:
    working_dir:
    depends_on:
      -
    ports:
      -
    volumes:
      -
    command:
  db:
    image:
    env_file:
    expose:
      -
    volumes:
      -

volumes:

In each of the following steps, read the instructions carefully to figure out what to put in each field. You'll be able to verify your solution once it is complete.

Some high level concepts first. A "service" is a container, as defined by a group of fields. At the top level, we have services and volumes. These are both objects, which means their children's names must be unique (e.g., web and db).

Each service has its own configuration, but there is nothing special about the names web and db in this example (all fields are supported in each).

tip

This syntax is called YAML, and it is very sensitive to indentation! Please refer to the example above if you are unsure of indentation in the following steps.

Service 1: A Node.js server (web)

  1. The web service (as named by the config) will be a container running the lts version of the node image. The syntax is <image>:<version>, so in this example you should have:
image: node:lts
  2. For working_dir, we are defining where the container should execute. This should be a directory within the container, for example, /repository. You may choose any path, but we suggest /repository so you can follow along with the example.
working_dir: /repository
  3. Next, we specify the dependencies of the web service. Since we could have multiple, depends_on is a list. In this example, we'll add a dependency on the db (database) service. This indicates that docker should start the database first, otherwise our website would have no data.
depends_on:
- db
  4. By default, docker runs containers in a fully isolated environment. This means we need to explicitly specify any ports or volumes that should be accessible from the host machine. In order to access the website, we'll set up a port binding from port 3000 on the host machine to port 4000 in the container. All bindings follow the syntax <VALUE_ON_HOST>:<VALUE_IN_CONTAINER>. So, for ports we would add '3000:4000' to the ports list.
ports:
- '3000:4000'
tip

Usually, we will use the same port for simplicity and portability. This example uses different ports to demonstrate which side is the host and which is in the container.

  5. Next, we will bind the current working directory, ./, to /repository, which is the location in the container we specified as our working directory previously. You can do this using the same syntax as with the port example, but add it to the volumes list.
volumes:
- ./:/repository
  6. Last but not least, we need to specify the command to run on startup. Remember, working_dir is where the container will begin execution, and also where our code will be. The package.json file defines a start command, which we can execute by adding npm start as the command. (The assembled web service is shown after this list.)
command: 'npm start'
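Putting steps 1 through 6 together, the web half of your docker-compose.yaml should now look roughly like this (indentation matters; the db service comes in a later section):

services:
  web:
    image: node:lts
    working_dir: /repository
    depends_on:
      - db
    ports:
      - '3000:4000'
    volumes:
      - ./:/repository
    command: 'npm start'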

Inspect the Node.js program

Before moving on, try inspecting the index.js file with the cat command:

cat index.js

Output:

const http = require('http');

const hostname = '0.0.0.0';
const port = 4000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World\n');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});

Starting Docker Compose

Now that we've written the first half of our docker compose file, this would be a good time to verify you're on the right track. You'll need to comment out the few parts of the docker-compose.yaml which you haven't completed yet, so it should look something like this.

docker-compose.yaml
version: '3.9'

services:
  web:
    image:
    working_dir:
    # depends_on:
    #   -
    ports:
      -
    volumes:
      -
    command:
  # db:
  #   image:
  #   env_file:
  #   expose:
  #     -
  #   volumes:
  #     -

# volumes:

Now you can go ahead and start docker compose.

docker compose up -d

Output:

Creating network "lab1-initial-setup_default" with the default driver
Creating lab1-initial-setup_web_1 ... done

There are a few new things going on in this command. We are using docker compose instead of docker. This is the base command for all of our interactions with docker in this setup. This supports all of the commands we've learned so far, such as run, exec, ps, and rm. Just as before, the -d flag indicates that we want to run docker in the background.

As an example, try running the following command in terminal.

docker compose ps

Output:

Name Command State Ports
------------------------------------------------------------------------------------------
lab1-initial-setup_web_1 docker-entrypoint.sh npm start Up 0.0.0.0:3000->4000/tcp
FYI

You may see instructions online that use docker-compose instead of docker compose. Just know that docker compose is simply the newer version, and the two are (mostly) the same.

Checking our Node.js website worked

If you recall, we mapped port 3000 on our host machine to port 4000 in the docker container. Now check that it works by browsing to http://127.0.0.1:3000 in your browser. Remember the console.log statement in index.js? From the perspective of our server, it is running on port 4000 (inside the docker container), but we're accessing it on port 3000 (through our host)!
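If you'd like a quick check from the terminal as well (assuming curl is installed on your machine), the same Hello World response should come back on the command line; the screenshot below still needs to come from your browser, though.

curl http://127.0.0.1:3000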

🎥Recording output

Take a screenshot of your browser window showing the address bar and the words Hello World. You should rename the file to intro_to_docker_compose_web.png and move it to the submission folder. You will submit this file in a later step.

Shutting things down

Now that we've learned how to start docker compose, let's cover how to shut down the running containers so we can move on to the database. As long as you're still in the same directory, the command is as follows.

docker compose down

Output:

Stopping lab1-initial-setup_web_1 ... done
Removing lab1-initial-setup_web_1 ... done
Removing network lab1-initial-setup_default

Now you can uncomment the sections you commented out in Starting Docker Compose.

Service 2: PostgreSQL Database (db)

In this course, we will be using PostgreSQL for our database.

  1. The db service should use version 14 of the postgres image.
image: postgres:14
  2. Next, define an environment file, .env, using the env_file key. This file contains the environment variables used to set up postgres. For this lab it is checked into your repo, which is technically OK because it only includes credentials for local users. However, this is bad practice, as it makes it easier to accidentally commit credentials for remote users. Regardless of whether the repository is private now, you would not want some random person in the future accessing your database over the internet, so you should not commit this file to GitHub.
env_file: .env
  3. Since this is a database, we don't need (or want) to access it directly from the host system. Instead, we "expose" the default postgres port (5432) internally, so other services (e.g., our website) can access it. We accomplish this by adding the port '5432' to the expose list.
expose:
- '5432'
  4. We also need to allocate a volume for the database. However, instead of binding the local filesystem, we can have docker manage the volume for us. We accomplish this by using the syntax <VOLUME_NAME>:<PATH_IN_CONTAINER>. For this volume, let's use the volume name lab-01-db and the in-container path /var/lib/postgresql/data. A docker-managed volume will always resemble a name, whereas a bind mount will resemble a path.
volumes:
- lab-01-db:/var/lib/postgresql/data
caution

Please note that this difference is important for databases, which tend to store data in protected files. These can be an annoyance if bind-mounted to your filesystem (esp. on Windows), and in most cases shouldn't be accessed directly from the host anyway.

  5. Lastly, we'll need to add the volume name we used above to the volumes object at the bottom of the file. (The assembled db service is shown after this list.)
volumes:
lab-01-db:
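Putting the db steps together, the second half of the file plus the top-level volumes entry should look roughly like this (it sits alongside the web service from earlier):

  db:
    image: postgres:14
    env_file: .env
    expose:
      - '5432'
    volumes:
      - lab-01-db:/var/lib/postgresql/data

volumes:
  lab-01-db: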

Unlike services, which are stateless, volumes persist across restarts. In most cases, this is what we want, but if you want to get rid of a hanging volume, e.g. to test your database init, run docker compose down -v. Note that this will remove ALL of your data, so be extra sure you've made a stable backup somewhere if the data is important.

Starting both services

Now that you've completed both services, perform the same commands as in Starting Docker Compose.

docker compose up -d

Output:

Creating network "lab1-initial-setup_default" with the default driver
Creating volume "lab1-initial-setup_lab-01-db" with default driver
Creating lab1-initial-setup_db_1 ... done
Creating lab1-initial-setup_web_1 ... done

This time, you'll notice there are now 2 additional lines in the output above. Our command now created a volume lab1-initial-setup_lab-01-db and we can also see it started the db service (lab1-initial-setup_db_1) before the web service, nice!

Let's do the same thing here with docker compose ps.

docker compose ps

Output:

Name Command State Ports
------------------------------------------------------------------------------------------
lab1-initial-setup_db_1 docker-entrypoint.sh postgres Up 5432/tcp
lab1-initial-setup_web_1 docker-entrypoint.sh npm start Up 0.0.0.0:3000->4000/tcp

Here too, we can note the difference between the port binding with the web service vs. the exposed port with the db service.

Working with PostgreSQL

When you ran docker compose up, you may have noticed that the database container was also started. Currently the database is empty, but we can interact with it using docker compose exec. During the installation process, PostgreSQL created a new user account named postgres. You'll be using this user account for most of your database interactions.

docker compose exec db psql -U postgres

Note that since we're using docker compose, we can refer to each container by its service name instead of the container ID.
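As an illustration (assuming the services from the previous step are still up), the same pattern works for the web service too; this hypothetical check prints the Node version inside that container:

docker compose exec web node --version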

Create a practice database

CREATE DATABASE practicedb;

Enter the practice database

\c practicedb;

Create a practice table

Copy the following code into the psql terminal

CREATE TABLE IF NOT EXISTS store (
  id SERIAL,
  product_name VARCHAR(40) NOT NULL,
  qty INTEGER NOT NULL,
  price FLOAT NOT NULL,
  PRIMARY KEY (id)
);

INSERT INTO store
(product_name, qty, price)
VALUES
('apple', 10, 1),
('pear', 5, 2),
('banana', 10, 1.5),
('lemon', 100, 0.1),
('orange', 50, 0.2);
tip

While all-caps is not required in SQL, it is considered a best practice, and can help keep things readable.
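If you want to experiment beyond the required steps, a quick aggregate query like the one below (not part of the submission) totals the value of the inventory you just inserted.

SELECT SUM(qty * price) AS total_value FROM store;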

Test your table

SELECT * FROM store;

Output:

id | product_name | qty | price
---+--------------+-----+-------
1 | apple | 10 | 1
2 | pear | 5 | 2
3 | banana | 10 | 1.5
4 | lemon | 100 | 0.1
5 | orange | 50 | 0.2
🎥Recording output

For this section, copy the queries you ran from Create a practice database through Test your table, along with the outputs of Create a practice table and Test your table, into a file named intro_to_docker_compose_db.txt, and move that file, together with your completed docker-compose.yaml file, to the submission folder. Label each entry as per the manual here. You will submit these files in a later step.

Exit postgres

\q

Exit Docker and clean up the containers.

Please be aware this will remove the associated volume, so make sure you've recorded your output for the previous steps.

docker compose down -v

And that's it! Keep in mind this is in no way a comprehensive list of all the things you can do with docker compose; more of that will come in a couple of weeks.

Part C

There is no homework for this lab 🎉

Submission Guidelines

Commit and upload your changes

1. Make sure you copy your files for submission into your local git directory for this lab, within a "submission" folder that you need to create.
2. Run the following commands at the root of your local git directory, i.e., outside of the "submission" folder.

git add .
git commit -m "add all screenshots and changes for lab 1"
git push

Once you have run the commands given above, please navigate to your GitHub remote repository (on the browser) and check if all the files have been updated/uploaded. If you see the files/changes there, you have successfully submitted your work for this lab.

You will be graded on the files that were present before the deadline. If files are missing or not updated by the deadline, you could receive a grade as low as 0. The grade will not be revisited, as any new pushes to the repository after the deadline will be considered a late submission, and we do not accept late submissions. Please do not submit a regrade request for this.

Regrade Requests

Please use this link to raise a regrade request if you think you didn't receive a correct grade.

Rubric

Description                                                                                   Points
----------------------------------------------------------------------------------------------------
Part A: Lab Quiz             Complete Lab Quiz on Canvas                                          20
Part B: docker-compose.yaml  A completed docker-compose.yaml file                                 20
Part B: NodeJS               Screenshot shows the browser's address bar and website content       20
Part B: PostgreSQL           File with the queries and outputs to insert and retrieve the
                             contents of the table; docker-compose.yaml file with all the
                             configuration set up                                                 20
Part C: N/A                  There is no homework for this lab                                     0
In-class check-in            You showed your work to the TA or CM                                 20
Total                                                                                            100

Please refer to the lab readme or individual sections for information on file names.