Dockerizing Your Django App


When I’m at work, I frequently encounter situations that can get quite messy. I was recently developing a supporting tool that consisted of a Python Django-based app and a jar file that had to be invoked, with its output sent to the Django app so the web dashboard could display it.

It was really cumbersome. At the time I had no idea about Docker, so my best bet was to:

  • Configure host system to run the jar file smoothly
  • Run the Django application inside a virtual environment

Both on a virtual machine. Setting up the Django web application was the easy part; the difficult part was the Java environment. JDK vs. JRE paths, versions, dependencies, … it was hell.

After that, I had an idea: since I have two almost-independent environments that need to run, and I need both to be kind of close to one another, why not learn Docker?

Ok, let me get something straight first: this article doesn’t intend to teach you Docker, Python, or Django. However, it should act as a reference for how to quickly “Dockerize” your Django-based application.


Ok, so let’s get to the commands and see how Dockerizing your Django app can be achieved.

Creating a test Django app

Let’s create our sample Django application:

If you don’t have Django installed:

$ pip install django

This could be pip or pip3, depending on your setup and whether you’re using the default Ubuntu-shipped pip or not. You can run:

$ which pip

to know which pip is being invoked.

Creating the app

We will create a sample project called theproject.

$ django-admin startproject theproject
# Let's go inside the directory and run it to test
$ cd theproject/
$ python manage.py runserver
Amazing! Sample app is running on port 8000.

Now that the app is running fine, let’s “dockerize” it. To dockerize the app, we will need to create a Dockerfile to guide Docker on how to build the image that we need for this application to run. This includes dependencies, what port to expose to the outside world so that we can access the app from outside Docker, and many more settings that we will not tackle here (volumes, for instance.)


FROM python:3

RUN mkdir /django_on_docker/
COPY requirements.txt /django_on_docker/

WORKDIR /django_on_docker

RUN pip install -r requirements.txt

COPY . /django_on_docker

EXPOSE 8000

# Note: only the last CMD in a Dockerfile takes effect, so this ls is overridden.
CMD ["ls"]
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

What we wrote basically does the following: import the default Python image, create a directory in the container, and copy requirements.txt, which lists our dependencies (in our case, just Django==2.2.6), so that they can be installed.

If we’re not sure, we can run pip freeze > requirements.txt to generate a txt file of the currently installed packages.

Creating the image

$ docker build -t testdjango .

You might run into sudo problems if you’re on Ubuntu. The previous command builds an image named “testdjango” from the Dockerfile in the path “.” (the current directory).


Sending build context to Docker daemon  20.48kB
Step 1/11 : FROM python:3
 ---> 02d2bb146b3b
 ---> Using cache
 ---> f189027eb76f
Step 3/11 : RUN mkdir /django_on_docker
 ---> Using cache
 ---> 00cbac3a8583
Step 4/11 : COPY requirements.txt /django_on_docker/
 ---> f18487ddaee8
Step 5/11 : WORKDIR /django_on_docker
 ---> Running in f17fe8db4a7e
Removing intermediate container f17fe8db4a7e
 ---> 9d260da6f98f
Step 6/11 : RUN pip install -r requirements.txt
 ---> Running in f00e52ca07b8
Collecting Django==2.2.6 (from -r requirements.txt (line 1))
  Downloading (7.5MB)
Collecting sqlparse (from Django==2.2.6->-r requirements.txt (line 1))
Collecting pytz (from Django==2.2.6->-r requirements.txt (line 1))
  Downloading (509kB)
Installing collected packages: sqlparse, pytz, Django
Successfully installed Django-2.2.6 pytz-2019.3 sqlparse-0.3.0
Removing intermediate container f00e52ca07b8
 ---> 313f0daa51db
Step 7/11 : COPY . /django_on_docker
 ---> 94c34b7e0786
Step 8/11 : WORKDIR /django_on_docker/
 ---> Running in eadfe9421c59
Removing intermediate container eadfe9421c59
 ---> b8767b36c53f
Step 9/11 : EXPOSE 8000
 ---> Running in 5917cb5c3789
Removing intermediate container 5917cb5c3789
 ---> 8137bb6fdfe6
Step 10/11 : CMD ["ls"]
 ---> Running in 83d1a91ce005
Removing intermediate container 83d1a91ce005
 ---> ec5558b57984
Step 11/11 : CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
 ---> Running in 4ce8bea1af13
Removing intermediate container 4ce8bea1af13
 ---> fb67dd3ed80f
Successfully built fb67dd3ed80f
Successfully tagged testdjango:latest

As we can see, Docker built the image, installed Django for us, and ran the commands we specified. The image is now ready to run!

The container

So, now we can create a container based on the image that we had built.

$ docker run -p 8000:8000 testdjango

Of course, there are many options we’re omitting: volumes, naming the container, user namespaces, running in interactive mode, and more. This is just for the sake of getting things up and running fast!

I might look into writing about those in a follow-up series!

Quick testing

Now we can visit localhost on port 8000 in the browser and find that it’s up and running! If things went smoothly, you’re probably happy that Dockerizing your Django app didn’t take that long!

Priority Queues part III: Implementation and Algorithms

We learned in the last post about possible ways of implementing a priority queue data type. In this part, we’re going to talk about representing a heap internally. We create a private array pq[] of length N+1. Our array will be 1-indexed.

Implementation and algorithms


As we have seen in the previous parts, for a max-oriented heap to be valid, a condition must hold: every node, say the element at index k, must be greater than or equal to both of its children, at indices 2k and 2k+1.

There are cases when this property is temporarily violated and should be corrected:

  • When a new node is inserted at the end (of the pq[] array), or when a key becomes larger than its parent’s. (The parent must be larger.)
  • When the root (the max, in case of a max-heap) is removed.
  • When a node’s value becomes smaller than one of its children’s (the node is no longer the max of its sub-heap).

Bottom-up reheapifying (swim)

If a violation happens and a node’s key becomes larger than its parent, we can fix this by exchanging the node with its parent.

If the node is still larger than its new parent, we can fix it the exact same way: we exchange it again with its parent.

We keep going up the heap until the key is no longer larger than its parent’s, or until we reach the root; then we stop swimming.

private void swim(int k) {
  while (k > 1 && less(k/2, k)) {
    exch(k/2, k);
    k = k/2;
  }
}

Top-down reheapifying (sink)

This is the second violation we might have: a node is smaller than one or both of its children. The fix takes a slightly different, but conceptually similar, approach: we exchange that node with the larger of its two children. If there’s still a violation at the child, we keep doing the same thing, pushing the node down the heap by exchanging it with the larger of its children, until either both children are smaller (so the node is indeed the max) or we hit the bottom of the heap (the end of pq[]).

Let’s take a look at the implementation:

private void sink(int k) {
  while (2*k <= N) {
    int j = 2*k;
    if (j < N && less(j, j+1)) j++;
    if (!less(k, j)) break;
    exch(k, j);
    k = j;
  }
}

Complete implementation of a Heap priority queue

public class MaxPQ<Key extends Comparable<Key>> {
  private Key[] pq;  // heap-ordered complete binary tree
  private int N = 0; // in pq[1..N] with pq[0] unused

  public MaxPQ(int maxN) {
    pq = (Key[]) new Comparable[maxN+1];
  }

  public boolean isEmpty() { return N == 0; }
  public int size() { return N; }

  public void insert(Key v) {
    pq[++N] = v;
    swim(N); // Restore heap order bottom-up.
  }

  public Key delMax() {
    Key max = pq[1]; // Retrieve max key from top.
    exch(1, N--);    // Exchange with last item.
    pq[N+1] = null;  // Avoid loitering.
    sink(1);         // Restore heap property.
    return max;
  }

  private boolean less(int i, int j) { return pq[i].compareTo(pq[j]) < 0; }

  private void exch(int i, int j) { Key t = pq[i]; pq[i] = pq[j]; pq[j] = t; }

  private void swim(int k) { /* as implemented above */ }

  private void sink(int k) { /* as implemented above */ }
}

Code is courtesy of Algorithms, 4th Edition by Robert Sedgewick and Kevin Wayne. Chapter 2, page 318.
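To see the same API in action without retyping the whole class, here’s a quick demo using Java’s built-in java.util.PriorityQueue configured as a max-oriented queue via Collections.reverseOrder(); its add/poll behave like our insert/delMax (the demo class name is mine):

```java
import java.util.Collections;
import java.util.PriorityQueue;

public class MaxPQDemo {
    public static void main(String[] args) {
        // java.util.PriorityQueue is min-oriented by default;
        // reverseOrder() flips it into a max-oriented queue.
        PriorityQueue<Integer> pq = new PriorityQueue<>(Collections.reverseOrder());
        int[] keys = {5, 1, 9, 3, 7};
        for (int k : keys) pq.add(k);               // insert
        StringBuilder sb = new StringBuilder();
        while (!pq.isEmpty()) sb.append(pq.poll()).append(' '); // delMax
        System.out.println(sb.toString().trim());   // keys come out largest-first
    }
}
```

Running it prints the keys in decreasing order, which is exactly the delMax contract.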


Priority Queues Part II: Heap Implementation in Java


In the previous part, we talked about the theory and motivation for priority queues. In this part, we dive a little bit into the implementation. We will see possible implementations of a priority queue, especially heap implementation, and in the next part of the series we will have a look at the Java implementation.


A priority queue is a data type that supports these operations:

  • insert: insert a new key into the queue
  • deleteMinimum (or maximum, we’ll get to that): pop and return the minimum (maximum) key from the queue.
  • isEmpty: is the collection empty?

These 3 operations would form our main API for the data type. We can go about implementing such a data type in a number of ways, and we’ll have different implementations of the insert and remove-max operations depending on how we structure our priority queue:

  1. Array-based (unordered): our “lazy” approach
    1. insert: similar to a stack’s push, we insert the element at the end of the unordered array and increase N, the pointer to the end of the array. This gives us an O(1) insert.
    2. removing max: we linearly search for the max, as in selection sort, and when we find it, we exchange it with the element at the end of the array and decrement N (we “fake-remove” it). We now have an O(n) deleteMaximum.
  2. Array-based (ordered):
    1. insert: like in insertion sort, find the correct index to insert the key and shift the remaining elements of the array. We now have an O(n) insertion.
    2. removing max: the array is already sorted; we remove the last element by decreasing the pointer into the array and returning the key. We get an O(1) deleteMaximum.

All elementary implementations leave us in a position where we have a trade-off between the performance of the insert and deleteMaximum operations. But there is a third implementation that can guarantee much better performance than O(n) on both operations.
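To make the lazy approach concrete, here’s a minimal sketch of an unordered-array max-PQ for ints (the class and method names are my own, not from the book): insert is O(1), delMax does the O(n) selection-sort-style scan.

```java
import java.util.Arrays;

// A minimal sketch of the unordered-array ("lazy") priority queue.
public class UnorderedMaxPQ {
    private int[] pq = new int[16];
    private int N = 0; // number of keys stored in pq[0..N-1]

    public void insert(int v) {
        if (N == pq.length) pq = Arrays.copyOf(pq, 2 * N); // grow if full
        pq[N++] = v; // O(1): just append at the end
    }

    public int delMax() {
        int max = 0; // index of the max seen so far
        for (int i = 1; i < N; i++)          // O(n) linear scan, as in selection sort
            if (pq[i] > pq[max]) max = i;
        int t = pq[max];                     // exchange max with the last element
        pq[max] = pq[N - 1];
        pq[N - 1] = t;
        return pq[--N];                      // "fake-remove" by decrementing N
    }

    public boolean isEmpty() { return N == 0; }

    public static void main(String[] args) {
        UnorderedMaxPQ q = new UnorderedMaxPQ();
        for (int v : new int[]{4, 8, 2, 6}) q.insert(v);
        StringBuilder sb = new StringBuilder();
        while (!q.isEmpty()) sb.append(q.delMax()).append(' ');
        System.out.println(sb.toString().trim());
    }
}
```

Each delMax pays the linear scan the insert avoided, which is exactly the trade-off described above.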

Priority Queues Heap Implementation

Heap implementation of a priority queue gives us a fast insert and a fast deleteMaximum. Let’s dig deeper into the heap implementation of priority queues. A heap is a data structure in which keys are stored such that each key is greater than or equal to the keys at two other specific positions (its children).

A heap is a balanced heap-ordered binary tree

If we think of a heap as a binary tree, each node contains a key that is greater than or equal to the keys in its children. Equivalently, the key in each node is smaller than or equal to the key in that node’s parent (if it has one). As we move up from any node, we get a nondecreasing sequence of keys; as we move down, we get a nonincreasing sequence. We can move up and down the tree (or heap) in logarithmic time.

The height of a heap of size N is ⌊lg N⌋

Heap representation

It turns out to be most convenient to represent a heap as an array. Since a heap is a complete binary tree, we have a good opportunity to use a compact array representation, as we don’t need explicit links.

We can see that each node at position k has its parent at position k/2 (integer division). Similarly, its children are at positions 2k and 2k+1.

This allows us to traverse the heap: we move up from a[k] by repeatedly going to the node’s parent at k/2, or move down to the children at 2k and 2k+1.
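The index arithmetic above fits in three one-liners (1-indexed, as in pq[]; the helper names are mine):

```java
public class HeapIndex {
    static int parent(int k)     { return k / 2; }     // integer division
    static int leftChild(int k)  { return 2 * k; }
    static int rightChild(int k) { return 2 * k + 1; }

    public static void main(String[] args) {
        // Node 5's parent is at index 2; its children are at 10 and 11.
        System.out.println(parent(5) + " " + leftChild(5) + " " + rightChild(5));
    }
}
```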

Next part

In the next part, we will take a look at the algorithms that are performed on the heaps and we will be solving a coding challenge that we will try to use priority queues in.


  • Algorithms, 4th ed.
  • Wikipedia articles on Huffman Coding, Dijkstra’s algorithm, Priority Queue.

Priority Queues Part I: What Are They?

Imagine that you have a big log of financial transactions and you want to extract the largest 5 transactions that happened. We might have a few ways to approach this.

1. Sort them? Duh.

Say we have 3 million transactions over the span of a month. We would have to do the following:

  • Store all the 3 million transactions in a data structure (array?)
  • Sort the array
  • Return the last 5 elements (assuming a nondecreasing ordering)

But we have a fundamental challenge with this approach:

  • Too many elements: we will use space equivalent to the number of elements or more, depending on the sorting algorithm

2. Compare each of the 3 million transactions with an array of 5 largest transactions seen so far.

  • You might want to be more empathetic towards the computer’s resources before deciding this is a good approach.
  • The time complexity of this approach would be O(N*M) where N is the number of elements, M is the amount of transactions we want to extract. It’s cumbersome if M is large.

3. Priority queues

We can process the transactions in chunks. This way we wouldn’t need to store all 3 million transactions. We create a collection of size 5, keep adding transactions to it, and whenever it exceeds its capacity of 5, we remove the minimum. If we keep doing this, we are guaranteed to end up with the 5 largest transactions in that collection.
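The chunked top-5 idea can be sketched with java.util.PriorityQueue playing the role of the size-5 collection (the transaction amounts here are made up for illustration):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.PriorityQueue;

public class Top5 {
    public static void main(String[] args) {
        // Pretend this is a stream of transaction amounts.
        int[] stream = {120, 45, 999, 300, 7, 850, 62, 410, 5, 777};
        // Min-oriented queue: the smallest of the current top 5 sits on top.
        PriorityQueue<Integer> top = new PriorityQueue<>();
        for (int t : stream) {
            top.add(t);
            if (top.size() > 5) top.poll(); // evict the minimum once we exceed 5
        }
        // The queue now holds the 5 largest transactions (in heap order).
        List<Integer> result = new ArrayList<>(top);
        Collections.sort(result);
        System.out.println(result);
    }
}
```

Only 5 keys are ever held at once, no matter how long the stream is, which is the whole point of the approach.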

Priority queue is a data type that has two fundamental operations: insert and remove the maximum.

So, what is a priority queue?

As we mentioned in the last paragraph in the previous section, a priority queue is a data type that can make our lives easier if we want to, for example, find a number of maximum or minimum values in a stream of data. Let’s see the applications.


There are situations where using a priority queue fits just perfectly. Here are some: Huffman coding, Dijkstra’s shortest-path algorithm, and finding the top-k elements in a data stream, as we did above.

But how is a priority queue formed?

In the next part, we will take a look at the implementation of priority queues.


  • Algorithms, 4th ed.
  • Wikipedia articles on Huffman Coding, Dijkstra’s algorithm, Priority Queue

Maximum Binary Tree — LeetCode #654

Hello. Thought I’d make it a habit to write (hopefully) weekly about some coding challenge I work on in my free time. So, welcome to the second article :). Today’s challenge is not actually that much different from the previous one (well, it is different, but not that different). Let’s take a look at the Maximum Binary Tree challenge on LeetCode.

The problem is on LeetCode and is of medium difficulty.

Problem Statement

The problem is simple: build the maximum binary tree that can be formed from the given array. I’ll explain more…

You’re given an array, for example, [3,2,1,6,0,5]. The Maximum Binary tree is defined in the problem statement as:

the tree with the root being the maximum value in the array

Well, this means that the maximum element in the given array is obviously 6. So, let’s make it our binary tree’s root.

After picking 6, the problem states that:

The left subtree is the maximum tree constructed from left part subarray divided by the maximum number.

The right subtree is the maximum tree constructed from right part subarray divided by the maximum number.

I believe this screamed recursion at my face!

The Algorithm

This basically means that, after picking the max element, 6, we’ll split the array into: [3, 2, 1] (6 was here) and [0, 5]

We will repeat what we did. For (sub)array [3, 2, 1], we’ll:

  • pick the max: 3, make it the (sub)tree node
  • split the array into [], [2, 1]
  • pick the max: 2
  • split the array into: [], [1]
  • pick the max: 1
  • no more work.

Our tree now looks like this (nulls mark the missing children):

    6
    └── left: 3
        ├── left: null
        └── right: 2
            ├── left: null
            └── right: 1

Why are there nulls?

Remember that in recursive algorithms, we need a stopping condition (base case). Otherwise, our recursion will run forever until stack overflows!

Alright, back to the problem. We walked through the recursive algorithm and we sank into the left subtree. If we backtrack and do the same for the remaining [0, 5] part of the array, the tree should look like this:

    6
    ├── left: 3
    │   └── right: 2
    │       └── right: 1
    └── right: 5
        └── left: 0

Nulls removed for readability.


If the current maximum element of a subarray were always in the middle of that subarray, we would have a balanced tree with O(log n) levels.

At every construction step, we loop over array[start…end], which is a linear, O(n) operation. This gives us, in the best case, O(n log n).

Yet, in the average and worst cases, the array isn’t in any particular order, so we might end up with an unbalanced tree that degenerates to the same shape as a linked list: O(n) levels * O(n) (for finding the max) = O(n²).

Implementation in Java

I’ll leave the explanation in the code comments.

Note: Of course there are more ways to approach this problem, but this is the one I thought easiest to follow.
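A sketch of the recursive construction described above, with the explanation in the comments (the TreeNode class follows the standard LeetCode definition; the helper names are mine):

```java
public class MaximumBinaryTree {
    // Standard LeetCode tree node definition.
    static class TreeNode {
        int val;
        TreeNode left, right;
        TreeNode(int val) { this.val = val; }
    }

    // Build the maximum binary tree from nums[start..end] (inclusive).
    static TreeNode construct(int[] nums, int start, int end) {
        if (start > end) return null; // base case: empty subarray -> null child

        // Find the index of the maximum element in this subarray: O(n).
        int maxIdx = start;
        for (int i = start + 1; i <= end; i++)
            if (nums[i] > nums[maxIdx]) maxIdx = i;

        // The max becomes the root; recurse on the two halves around it.
        TreeNode root = new TreeNode(nums[maxIdx]);
        root.left = construct(nums, start, maxIdx - 1);
        root.right = construct(nums, maxIdx + 1, end);
        return root;
    }

    public static TreeNode constructMaximumBinaryTree(int[] nums) {
        return construct(nums, 0, nums.length - 1);
    }

    public static void main(String[] args) {
        TreeNode root = constructMaximumBinaryTree(new int[]{3, 2, 1, 6, 0, 5});
        // Root is the max, 6; its children are the maxima of the two halves.
        System.out.println(root.val + " " + root.left.val + " " + root.right.val);
    }
}
```

On the example array this reproduces the tree we drew by hand: 6 at the root, 3 and 5 as its children.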


Keys and Rooms — LeetCode Coding Challenge


Hello, world. I have been brushing up on algorithms and data structures, and a big part of what I do is solving LeetCode and HackerRank challenges. I wanted to stay in ready-for-interview mode for some time. Who knows when an opportunity may come!

I encountered this problem: Keys and Rooms. Despite not solving many challenges related to graphs before, I felt that this challenge was a good way to start hacking into solving programming challenges that are based on graphs and their algorithms.

Problem Statement

Firstly, let’s look at the description, which is straightforward: there are N rooms (named 0, 1, …, N-1). Each room contains keys to some number of other rooms. For example, room 0 could contain keys to rooms 1 and 3.

If we start by going into one of the rooms, say room 0, how do we make sure that all the rooms are reachable?

For example:

Input: [[1],[2],[3],[]]

Keys in room 0: {1}
Keys in room 1: {2}
Keys in room 2: {3}
Keys in room 3: {} (no keys)

Output: true

We go to room #0, we find a key to room #1. We go to room #1, we find a key to room #2. We go to room #2, we find a key to room #3. We go to room #3, we find no keys.

Eventually, we find out that the output is true because we could visit each and every room at least once.

How I solved it

It was really simple. Depth-first search. Think of the rooms as nodes in a graph, and the only way to move from one node (room) to another is an edge (key).

DFS (iterative version) pseudocode:

  procedure DFS-iterative(G, v):
      let S be a stack
      S.push(v)
      while S is not empty:
          v = S.pop()
          if v is not labeled as discovered:
              label v as discovered
              for all edges from v to w in G.adjacentEdges(v) do
                  S.push(w)

We model the rooms as nodes, the keys as edges to the rooms designated by the value of the key. For instance, in the example: [[1,3],[3,0,1],[2],[0]] we build a graph of [0, N-1] nodes. That is, rooms: [0, 1, 2, 3]. For each node (room), we create an edge connecting the current room with the room referenced by the value of the key.


In this example, room 2 contains a key only to itself, and no other room contains a key to room 2. Hence, the output should be false, since there is at least one room we cannot visit.

The coding part
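Here’s my take on the iterative DFS in Java, following the pseudocode above (the method name matches the LeetCode signature; the wrapper class and demo are mine):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

public class KeysAndRooms {
    public static boolean canVisitAllRooms(List<List<Integer>> rooms) {
        boolean[] visited = new boolean[rooms.size()];
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(0); // we always start in room 0
        int seen = 0;
        while (!stack.isEmpty()) {
            int room = stack.pop();
            if (visited[room]) continue; // already discovered
            visited[room] = true;
            seen++;
            for (int key : rooms.get(room))   // each key is an edge to another room
                if (!visited[key]) stack.push(key);
        }
        return seen == rooms.size(); // true iff every room was reachable
    }

    public static void main(String[] args) {
        List<List<Integer>> example1 = Arrays.asList(
            Arrays.asList(1), Arrays.asList(2), Arrays.asList(3), Arrays.asList());
        List<List<Integer>> example2 = Arrays.asList(
            Arrays.asList(1, 3), Arrays.asList(3, 0, 1), Arrays.asList(2), Arrays.asList(0));
        System.out.println(canVisitAllRooms(example1)); // all rooms reachable
        System.out.println(canVisitAllRooms(example2)); // room 2 unreachable
    }
}
```

Running it on the two examples from the write-up prints true for the first and false for the second, matching the expected outputs.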

To sum up, I found it a great experience to write about solving a challenge. Not just solving it, but actually writing about the solution: researching the algorithm, brushing up on Java for the implementation, and revisiting the algorithms to ensure the correctness of the explanation. It really adds a lot.

Finally, as I said at first, I believe a person learns more about a topic when they actually start writing about it.

I will try to make this a habit: keep solving interesting challenges and writing about them, explaining the approach and the algorithm behind each solution.