Working w/ GitHub & Visual Studio 2019 – Getting Your Project on GitHub – Part 2

Microsoft makes it simple to add a new or existing project to GitHub with Visual Studio 2019 and the GitHub VS2019 extension. In this post we will review how to create a local Git repository using Visual Studio 2019, how to create a remote repository on GitHub using the GitHub VS2019 extension, and briefly cover some basic source control operations.

Creating Our Local Git Repository

After installing the GitHub Extension for Visual Studio, start a new project or open an existing one. Once your project has been created and/or loaded, look to the lower-right corner of the window and click “Add to Source Control” -> “Git”. Visual Studio 2019 will automatically create a local Git repository for you and place all necessary files in your solution folder. Most of these files are hidden unless you explicitly have your file explorer show hidden files.

Once our local Git repository has been created, the text will change from “Add to Source Control” to a helpful little toolbar. Let’s briefly run through what is available to us here. From left to right, we have our pending commits, pending changes denoted with a pencil icon (the number here is the number of files that have been changed but not yet committed), our current working Git repository and our current working branch. (Psst, if you are unfamiliar with any of these words, you may refer to the GitHub glossary.)

You may be wondering why you have two pending commits after you just created your Git repository. These commits were created by VS 2019, one for a couple of our Git files and another for our project files. You can view your commit history by expanding the working branch menu in the toolbar shown above and clicking “View History”. You can also click on any individual commit to view more details in the Team Explorer pane such as the commit’s parent, the files changed and comments written by the commit author.

Lastly, you may notice that not all files in your solution folder are part of your repository. This is intentional, as not all files in the folder are needed to build your project. I recommend taking a brief look at the .gitignore file in your solution folder, which VS2019 pre-fills for you. You can edit the .gitignore file if there are additional items you do not want in source control, such as files containing sensitive information. It is worth mentioning that the developer community appears divided on whether certain files containing sensitive information should be placed in source control (particularly in remote repositories); I invite you to research the question and act upon your particular scenario and needs.
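To illustrate, a .gitignore lists one pattern per line. The entries below are typical Visual Studio patterns plus a hypothetical secrets file, not the exact contents VS2019 generates:

```
# Build output and per-user IDE settings (typical Visual Studio patterns)
bin/
obj/
.vs/
*.user

# Hypothetical file holding sensitive settings you may not want pushed
secrets.config
```

A pattern applies to any matching path in the repository unless it is anchored with a leading slash.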

Creating Our Remote Repository on GitHub

We now have a local Git repository; however, it is not yet backed up on GitHub. We need to create a remote repository on GitHub and sync it with our local repository. Click on the Home button in the Team Explorer pane and select “Sync”.

Now click the “Publish to GitHub” button, fill out the necessary information and click “Publish”. Your repository should now be visible on your GitHub account and the two commits created by VS2019 should have been pushed (along with any additional commits you may have made before creating the remote repository).

Future Changes – Pulling and Pushing

We are now set and ready to begin working with our repository. Let’s briefly review how to add a new commit to our local repository and push it out to our remote repository.

Git will keep track of any changes we make to our source code files. In the Solution Explorer pane in Visual Studio 2019, we can see helpful icons next to our files that show the state of each file. You can hover over these icons to get a descriptive tooltip.

After making some edits to your source code, head back to the Team Explorer pane, click Home and then Changes. You will see the files that have been altered since your last commit. In order to make a commit, you must enter a message describing the change. Once entered, you will have three options: 1) Commit All, 2) Commit All and Push, and 3) Commit All and Sync.

Here’s an overview of each of these options:

  • Commit All – A commit will be created locally.
  • Commit All and Push – A commit will be created locally and pushed to your remote repository.
  • Commit All and Sync – A commit will be created locally, any additional changes from the remote repository will be synced into your local repository, and finally your commit will be pushed to your remote repository.

If you are the sole contributor of your project, it is okay to use option two. However, if you are working in a team it’s good practice to use option three to keep your local code base up to date since your teammates may have pushed changes. Alternatively, you may choose to create your commit locally and then push or sync manually through the “Sync” option under Home in the Team Explorer pane.
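For reference, these three buttons map onto plain Git commands. The sketch below simulates a remote with a local bare repository so the whole flow is runnable end to end; every path, file name, and identity in it is made up for illustration:

```shell
# Simulate a "remote" by creating a bare repository locally (paths are made up)
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"
git clone -q "$tmp/remote.git" "$tmp/work"
cd "$tmp/work"
git config user.email "dev@example.com"   # placeholder identity
git config user.name "Dev"

echo 'class Program {}' > Program.cs

# "Commit All" -- record the change locally only
git add -A
git commit -q -m "Add Program.cs"

# "Commit All and Push" -- the same commit followed by a push
git push -q origin HEAD

# "Commit All and Sync" would additionally run `git pull` before the push,
# bringing in any commits your teammates pushed in the meantime
```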

Whether you choose to commit to your remote repository manually or not, the “Sync” area allows you to push local commits to your remote repository and also check for new commits in the remote repository that may not exist in your local repository. The Incoming Commits section will present any commits made to the remote repository that are not yet in your local code. You can click “Fetch” to check whether there are any such commits, and “Pull” to actually bring in changes present in the remote repository that do not exist in your local repository. The Outgoing Commits section will contain any commits you created locally that have not yet been pushed to your remote repository. Click “Push” to push your commits to your remote repository.

If you are working in a team, it is a good idea to start your day by syncing your code. This gives you the most up-to-date code base and allows you to catch bugs or other problems earlier in the process.

Lastly, I would like to mention there are some source control shortcuts available to you in Visual Studio 2019 when you right-click a file. Most importantly, the “Undo” option will remove all pending changes in a file if you feel you have completely totaled it and want to start anew.

Going Forward

You should now be able to use source control at its most basic level. If you are still curious about source control in Visual Studio 2019, it only takes a couple of minutes to make a test project with a local and remote repository and play with the source control features available in Visual Studio 2019.

Once you feel comfortable with pushing and pulling with your remote repository, I strongly advise learning about and making use of branches, even if you feel your project does not require them. You can start by creating a development branch. As a new developer, this will give you experience applicable in real-world scenarios where businesses may use several branches. You will also get experience in what can sometimes be the messy process of merging branches.

Working w/ GitHub & Visual Studio 2019 – Adding VS2019 GitHub Extension – Part 1

This is part one of a small series illustrating how to get started using GitHub with Visual Studio 2019. We will first add the GitHub extension to our IDE, then go over how to start a new project or add an existing project to GitHub, and finally look into how we can bring an existing GitHub project onto our local computer and begin making contributions.

Regardless of the complexity of your project, it is always a great idea to use source control. Interacting with source control software is a daily activity in the real world, and any hands-on familiarity really boosts your credentials when starting a job as a software developer. (These days, source control is used not only by developers but also by artists and writers!)

Source Control vs. Version Control

You might hear these two terms used interchangeably in day to day conversation and wonder if there is any difference between the two. The answer is, not particularly. There is a greater difference between source control and revision control than source control and version control. Version control is a broader term that encompasses both source control and revision control. Source control adds the features of branching and merging to differ from revision control. Whether you refer to it as source control or version control, others will know what you are referring to. If you are interested in reading more about this topic, please check out this StackOverflow thread.

Git vs. GitHub

GitHub is a hosting service for Git repositories, and Git is the actual tool that makes version control possible. There are several Git hosting services available, such as GitLab, BitBucket, and SourceForge, so you are not limited to using GitHub. Some of these services also have Visual Studio extensions so you can easily integrate them into VS 2019.

Basic Git features come packaged with VS 2019 but Microsoft does recommend installing the full version of Git on your machine. For this series, we can get by without installing Git. Please refer to the link provided to see cases where you will need a full installation of Git.

Getting Started

In order to use GitHub with your project in Visual Studio 2019 we will need to download and install a Visual Studio extension.

1. Begin by opening an instance of VS 2019. In the menu, click Extensions and then Manage Extensions.

2. A new window containing a number of extensions available for VS 2019 will open. If the GitHub extension is not among the top listed, search for “GitHub” and click “Download”.

3. A new window will open up and initialize the download process. Once downloaded, you will need to verify that you want to install the extension. Select “Modify” to agree. Your instance of Visual Studio 2019 will likely need to restart.

Upon restarting Visual Studio 2019, you should now see GitHub under Azure in the Hosted Service Providers section of the Team Explorer pane. Click “Sign Up” if you don’t have a GitHub account or “Connect” to log into an existing GitHub account. Tada! We are ready to begin using GitHub within VS2019.

Merge Sort in C#

Merge sort is an algorithm used to sort a collection of items using the divide and conquer paradigm. The algorithm was conceived by John von Neumann in 1945.

The algorithm works by breaking a list down into n sublists until each sublist has a length of one. This is accomplished by recursively calling a mergeSort function whose task is to identify the middle point of a given list or, if the list is already of size one, to return it as-is. Once we have reached the end of a particular branch and have two sublists of size one, the algorithm begins to merge the sublists. These merges bubble up a sorted list. The function call stack below gives a better picture of this “bubbling up” nature.

Here is an implementation of merge sort in C#. It is based on the C code at GeeksForGeeks.

// Divides a given array in half until length one and then merges
static void mergeSort(int[] arr)
{
   if (arr.Length > 1)
   {
      int middlePoint = arr.Length / 2;
      int[] leftArr = new int[middlePoint];
      int[] rightArr = new int[arr.Length - middlePoint];
      for (int i = 0; i < middlePoint; i++)
      {
         leftArr[i] = arr[i];
      }
      for (int i = 0; i < (arr.Length - middlePoint); i++)
      {
         rightArr[i] = arr[middlePoint + i];
      }
      mergeSort(leftArr);
      mergeSort(rightArr);

      merge(arr, leftArr, rightArr);
   }
}

// Merges two sorted arrays in order
static int[] merge(int[] merged, int[] left, int[] right)
{
   int indexLeft = 0, indexRight = 0, indexMerged = 0;

   while (indexLeft < left.Length && indexRight < right.Length)
   {
      if(left[indexLeft] <= right[indexRight])
      {
         merged[indexMerged] = left[indexLeft];
         indexLeft++;
      }
      else
      {
         merged[indexMerged] = right[indexRight];
         indexRight++;
      }
      indexMerged++;
   }

   while(indexLeft < left.Length)
   {
      merged[indexMerged] = left[indexLeft];
      indexLeft++;
      indexMerged++;
   }

   while (indexRight < right.Length)
   {
      merged[indexMerged] = right[indexRight];
      indexRight++;
      indexMerged++;
   }

   return merged;
}

// Driver
static void Main(string[] args)
{
   int[] myArray = { 5, 22, 1, 2, 45 };
   mergeSort(myArray);
   foreach(int item in myArray)
   {
      Console.Write(item + ","); // 1,2,5,22,45,
   }
}

Below is a function call stack to sort the array [5, 22, 1, 2, 45]. Notice that the algorithm keeps halving the list until both sides are of size 1. Once leftArray and rightArray are of length one, we call the merge function. Due to merge sort’s recursive structure we bubble up merging the sublists into each other. I have used the => symbol to signify the value returned by the function.

mergeSort([5, 22, 1, 2, 45])
	mergeSort(leftArray = [5, 22])
		mergeSort(leftArray = [5])
		mergeSort(rightArray = [22])
		merge(merged = [5, 22], leftArray = [5], rightArray = [22]) => [5, 22]
	mergeSort(rightArray = [1, 2, 45])
		mergeSort(leftArray = [1])
		mergeSort(rightArray = [2, 45])
			mergeSort(leftArray = [2])
			mergeSort(rightArray = [45])
			merge(merged = [2, 45], leftArray = [2], rightArray = [45]) => [2, 45]
		merge(merged = [1, 2, 45], leftArray = [1], rightArray = [2, 45]) => [1, 2, 45]
	merge(merged = [5, 22, 1, 2, 45], leftArray = [5, 22], rightArray = [1, 2, 45]) => [1, 2, 5, 22, 45]

JavaScript: Shallow Copy vs Deep Copy

While working through the exercises in Eloquent JavaScript, there was an exercise where we had to implement a prepend function for a list object. It appeared simple but there was a small detail that would completely change the output of my function. Can you spot the difference between the two functions below?

const prependShallow = (element, list) => {
  return { value: element, rest: Object.assign({}, list) };
}

const prependDeep = (element, list) => {
  return { value: element, rest: list };
}

The first function returns a new object whose top-level properties are copied from list (since Object.assign() makes a shallow copy, any objects nested deeper inside list are still shared). The second function also returns a list object, but the value in the key rest is a direct reference to the parameter list. Is either answer more correct than the other? I would say it depends on your objective and how you will be using the object. The copying implementation follows the concept of a “pure” function. This has the benefit of avoiding side effects and making testing easier. However, there can be cases where performance is of utmost importance, and using a reference instead of creating a copy can save us processing time and memory.

Primitive and Composite/Complex Data Types

Before going into shallow vs deep copying let’s quickly review data types as they provide us with the necessary framework to process the why’s of shallow and deep copying.

A primitive data type is usually built into the language and is one of its basic building blocks. These values are typically stored directly at a memory address and are passed by value. JavaScript has seven primitive types:

  • Boolean
  • null
  • undefined
  • Number
  • BigInt
  • String
  • Symbol

A composite/complex data type consists of a grouping of primitive data types, as seen in arrays or objects. These values contain a reference (a memory address) to the physical location where the grouping begins (i.e. arr[0]). JavaScript has one composite/complex type: Object. Why do composite data types store a reference instead of a value? Imagine having an array with 10,000 elements, and now imagine having to pass that array to a function by value. Passing these values by reference allows us to make better use of computer space (memory) and time (processor).
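The difference is easy to see in a couple of lines; the bindings below are made up for illustration:

```javascript
// Primitives are copied by value
let num = 1;
let numCopy = num;
numCopy = 2;              // num is untouched

// Objects are handed around by reference
let scores = { math: 90 };
let scoresRef = scores;
scoresRef.math = 100;     // visible through the original binding too

console.log(num);         // 1
console.log(scores.math); // 100
```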

It is worth noting that in JavaScript the composite/complex data type Object is the ancestor of most non-primitive entities. The prototype of an array is Array.prototype, whose prototype is Object.prototype; the prototype of a function likewise traces back to Object.prototype.

Shallow Copy vs. Deep Copy

The concept of shallow and deep copying only applies to composite/complex data types as these entities are passed by reference.

In the examples below, dogDeepCopy is created by plain assignment. Nothing is actually copied: both bindings point to the same memory location, so they always share the same keys and values, and any change made through either binding is reflected in the other.

A shallow copy, such as dogShallowCopy below, is a new object at its own memory location whose top-level keys and values are copied from the original. A change to a top-level property of either object is not reflected in the other. (A true deep copy goes one step further and recursively duplicates nested objects as well; when all properties are primitives, as here, a shallow copy behaves like a full copy.)
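Object.assign() copies only the top level, so nested objects remain shared; a true deep copy duplicates every level. A quick sketch (the names are made up, and the JSON round-trip trick only works for plain data, since it drops functions, undefined, and similar values):

```javascript
let owner = { name: "Ari", pet: { name: "Cookie" } };

// Shallow copy: the top level is new, but owner.pet is still shared
let shallow = Object.assign({}, owner);
shallow.pet.name = "Brownie";
console.log(owner.pet.name); // "Brownie" (the nested object was shared)

// Deep copy via a JSON round-trip: every level is duplicated
let deep = JSON.parse(JSON.stringify(owner));
deep.pet.name = "Ice";
console.log(owner.pet.name); // still "Brownie" (fully independent)
```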

Let’s follow through an example. We have a binding dog that will be our original object and two copies, dogDeepCopy and dogShallowCopy. In memory, dog and dogDeepCopy point to the same address, while dogShallowCopy occupies its own.

let dog = {
  name: "Brownie",
  age: 5
};
let dogDeepCopy = dog;
let dogShallowCopy = Object.assign({}, dog);

When the equality operator, ==, is applied to two objects, JavaScript tests whether the two objects reference the same memory location. The expression returns true if they point to the same memory location and false if they don’t. It does not do a comparison between the keys and values of the objects (more on that later).

console.log("dog == dogDeepCopy -> " + (dog == dogDeepCopy));
console.log("dog == dogShallowCopy -> " + (dog == dogShallowCopy));

/*
'dog == dogDeepCopy -> true'
'dog == dogShallowCopy -> false'
*/

Again, since dogDeepCopy points to the same location as dog, any changes to either object will be reflected in the other object. However, since our shallow copy dogShallowCopy is operating on its own memory block, we do not see that behavior. Try working through the statements below before looking at their output, as a small exercise!

//Change dogDeepCopy name
dogDeepCopy.name = "Cookie";
console.log("dog.name -> " + dog.name);
console.log("dogDeepCopy.name -> " + dogDeepCopy.name);
console.log("dogShallowCopy.name ->" + dogShallowCopy.name);

//Change dog name
dog.name = "Brownie";
console.log("dog.name -> " + dog.name);
console.log("dogDeepCopy.name -> " + dogDeepCopy.name);
console.log("dogShallowCopy.name ->" + dogShallowCopy.name);

//Change dogShallowCopy name
dogShallowCopy.name = "Ice";
console.log("dog.name -> " + dog.name);
console.log("dogDeepCopy.name -> " + dogDeepCopy.name);
console.log("dogShallowCopy.name ->" + dogShallowCopy.name);

/*
'dog.name -> Cookie'
'dogDeepCopy.name -> Cookie'
'dogShallowCopy.name ->Brownie'
'dog.name -> Brownie'
'dogDeepCopy.name -> Brownie'
'dogShallowCopy.name ->Brownie'
'dog.name -> Brownie'
'dogDeepCopy.name -> Brownie'
'dogShallowCopy.name ->Ice'
*/

Object.create() vs Object.assign()

A newbie mistake I made when first learning this concept was using the Object.create() and Object.assign() functions interchangeably. JavaScript will look for a property in an object’s prototype (and so on up the chain) if it does not find it among the object’s own direct properties. I initially believed I had created a shallow copy with Object.create(); however, a closer inspection showed my “copy” had no direct properties and its prototype contained a reference to my original object. This meant changing the original object reflected the change in my “copy’s” prototype, which made me aware of my mistake. (A true shallow copy would not have exhibited this behavior.)

Object.create() is used when you want to create a new object and have its prototype be an existing object. Object.assign() is used to copy the properties of a source object into a target object.

The code below is erroneous. It uses Object.create() to try to create a shallow copy, but we can see that the original object becomes the copied object’s prototype. Note that we can still access the property name through dogCopy even though it lives on its prototype and is not a direct property.

let dog = {
  name: 'Brownie',
  age: 5
};
let dogCopy = Object.create(dog);

console.log("dog.name -> " + dog.name);
console.log("dogCopy.name -> " + dogCopy.name);

// Change dog name, notice the error: we didn't want dogCopy to change name
dog.name = 'Ice';
console.log("dog.name -> " + dog.name);
console.log("dogCopy.name -> " + dogCopy.name);

// See object's direct properties
console.log("dog keys -> " + Object.keys(dog));
console.log("dogCopy keys -> " + Object.keys(dogCopy));
console.log("dogCopy prototype property 'name' -> " + dogCopy.__proto__.name);

/*
'dog.name -> Brownie'
'dogCopy.name -> Brownie'
'dog.name -> Ice'
'dogCopy.name -> Ice'
'dog keys -> name,age'
'dogCopy keys -> '
'dogCopy prototype property 'name' -> Ice'
*/

Arrays

The concept of shallow and deep copying is also relevant to arrays. Recall that arrays are also of type Object in JavaScript. You can create a shallow copy of an existing array with the spread operator (...). The example below shows the same behavior described above, this time with arrays.

let arr = [1, 2, 3];
let arrDeepCopy = arr;
let arrShallowCopy = [...arr];

console.log("arr == arrDeepCopy -> " + (arr == arrDeepCopy));
console.log("arr == arrShallowCopy -> " + (arr == arrShallowCopy));

console.log("Push '4' to arr");
arr.push(4);
console.log("arr -> " + arr);
console.log("arrDeepCopy -> " + arrDeepCopy);
console.log("arrShallowCopy -> " + arrShallowCopy);

console.log("Push '5' to arrShallowCopy");
arrShallowCopy.push(5);
console.log("arr -> " + arr);
console.log("arrDeepCopy -> " + arrDeepCopy);
console.log("arrShallowCopy -> " + arrShallowCopy);

/*
'arr == arrDeepCopy -> true'
'arr == arrShallowCopy -> false'
'Push '4' to arr'
'arr -> 1,2,3,4'
'arrDeepCopy -> 1,2,3,4'
'arrShallowCopy -> 1,2,3'
'Push '5' to arrShallowCopy'
'arr -> 1,2,3,4'
'arrDeepCopy -> 1,2,3,4'
'arrShallowCopy -> 1,2,3,5'
*/

Shallow/Deep Copy vs Shallow/Deep Comparison

The concepts of shallow and deep copying should be distinguished from the concepts of shallow and deep comparison. These comparisons check whether the contents of two objects are the same, that is, whether they contain the same keys and the same values. A deep comparison delves deeper by following each reference until it reaches a value; a shallow comparison does not follow references. JavaScript does not have built-in functionality for these comparisons, but you can write your own function or use an existing library. Remember, the built-in behavior of the equality operator, ==, on two objects is to check whether the objects refer to the same memory location.
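For illustration, a minimal shallow comparison could look like the sketch below (shallowEqual is a made-up helper, not a built-in):

```javascript
// Compares own enumerable keys with ===; references are not followed
const shallowEqual = (a, b) => {
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every(key => a[key] === b[key]);
};

console.log(shallowEqual({ x: 1, y: 2 }, { x: 1, y: 2 }));   // true
console.log(shallowEqual({ x: { y: 1 } }, { x: { y: 1 } })); // false (different references)
```

A deep comparison would recurse into values that are themselves objects instead of stopping at ===.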

Last Words

If you are interested in reading through the book, Eloquent JavaScript, yourself it is available for free. Much gratitude to the author Marijn Haverbeke for creating this resource. The book is also available in print for those that prefer to handle a physical book (me).

Notes to Self for Future Debian Installation

It hurt my heart to see my old PC gathering dust in my parents’ closet, so I decided it would be a good idea to turn it into a Linux workstation, since my main PC has become clunky after installing new software for work. I know very little about Linux, but I enjoy a challenge and tinkering with things. Plus, running a VM with Oracle VirtualBox gets old once you actually start working. I know I could have started with a more user-friendly OS like Ubuntu, but I wanted to play around with an OS I would more likely encounter in a professional space. I’m also lazy about updating software, so Debian sounded right.

I read the first few sentences off the main Debian installation page and decided to try the net install. I cleared my 4GB USB from 2010 and used the portable version of Rufus to make my USB bootable.

I opened my old PC’s UEFI, made my USB the main boot device, and began the installation process. It was all going well until we reached the network part. Debian could detect my devices but couldn’t make use of them since the firmware was not installed (brcm/bcm43xx-0.fw and rtl_nic/rtl8168e-2.fw). It turns out Debian does not ship with support for non-free firmware/software, and since I was doing a net install, I couldn’t continue with the installation until I got my internet connection set up.

After scouring around for another spare USB, I downloaded the firmware off Debian’s site, enabled the command line tools on the Mac laptop, and extracted the files onto the USB. I later found out those last two steps were probably unnecessary since we have access to a terminal during the installation. After a while of searching, I found this beautiful post perfectly outlining all the necessary steps to get Debian to find the firmware for my network adapter.

After setting up the firmware the rest of the installation was almost pain free except I received a “Debootstrap error Failed to determine the codename for the release” error after having the wizard partition my hard drive. After following the instructions from this StackOverflow thread I was able to continue with the “Install the base system” step and finish my installation.

After the installation completed, I attempted to boot into my new system. However, now I needed firmware for my graphics card (the error read: “Radeon kernel modesetting requires firmware”), so I was only able to use the tty terminal. Since I had provided the firmware for my network adapter during installation, I thought I could get the missing firmware with apt-get; however, none of the network settings that had been configured during installation were there! So there I was again, with no internet connection, and the firmware I had loaded previously was gone. I stepped through the process again, mounting the same USB with the firmware files into a temporary folder and moving the necessary files into /lib/firmware. The missing firmware error cleared. However, I was still unable to connect to the internet.

The remaining process was actually quite backwards. My internet searches led to a variety of possible causes, but as a newbie, I had trouble discerning what even applied to my case. Even though my network adapter was now being recognized, I needed to set it up as a network interface. After setting up the interface, I was still unable to connect to the internet. I came across this article that said I needed to provide my WPA password and network name. The question had previously popped into my mind: how can I log into a network that is locked behind a password? But I had ignored my gut feeling, believing I would get an error telling me to specify the network name. After entering the network name and password into /etc/network/interfaces and rebooting my system, I was able to connect to the internet! My routes table was no longer empty and a default gateway had been automatically entered by the system. There was no need to edit anything in my resolv.conf file as many of the threads recommended.
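For reference, the stanza I ended up with in /etc/network/interfaces looked roughly like the one below; the interface name, network name, and passphrase are all placeholders, so substitute your own:

```
# /etc/network/interfaces (sketch; values are placeholders)
auto wlan0
iface wlan0 inet dhcp
    wpa-ssid MyHomeNetwork
    wpa-psk MySecretPassphrase
```

This relies on the wpasupplicant package’s ifupdown integration; running ifup wlan0 (or rebooting) brings the interface up.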

Finally, I could go back and continue with the package I wanted to install through apt-get. The package included firmware for my graphics card so I could use the GUI. After making some small changes to /etc/apt/sources.list to allow non-free package installation, I was able to install the firmware-amd-linux package. After another reboot, I finally booted into my chosen DE and could also see the Wicd Network Manager.

In retrospect, the experience brought me to appreciate what more user-friendly OSs have done to make computers and the internet more accessible.

Lastly, this one is a given, but I kind of dived into this without reading any documentation. I believe the experience would have been smoother had I at least skimmed through the Debian installation guide. In other words, the “errors” I encountered would have been expected, as they are documented.

FCC: Markdown Previewer

I completed my first project off of FCC’s Front-End Libraries Certification! Below is my CodePen for the Markdown Previewer.

The partially resizable windows were inspired by their featured sample and the look and feel by an old Windows OS.

See the Pen FCC: Markdown Previewer – MS Inspired by Ari (@yecantu) on CodePen.

There were a few issues that I stumbled upon while working on this mini project:

  1. I needed to learn how to nest React components when a component doesn’t know its future children ahead of time. This was the case for the window objects.
  2. The scrollbar on the editor text area is not perfect. I wanted each window to have a scrollbar appear when the content overflows, but a text area already has its own scrollbar. My initial approach was to use a parent CSS selector, but I discovered they don’t exist! It took me a while to notice that a text area’s display property is inline by default. After setting the editor’s display property to block, I was able to cover the scrollbar from its parent div. However, it does not appear to be properly aligned and I cannot apply padding to the text area.
  3. Markdown itself.

4 Ways to Secure Your Linux Server

Every minute, every few seconds, your server receives a number of malicious connections: from an IP address in Moldova checking to see if a far-out port is open, to someone in Iran trying to log into your server with a random username. It’s a bit unsettling, but rest assured these connections likely stem from autonomous crawlers scanning the web, and you are not being personally targeted.

The attacks seem to take on two different forms: 1) the port is accessed via the SSH protocol (i.e. accessible through the use of specialized software or browser extensions), or 2) the port is accessed via HTTP (i.e. accessible via a fresh browser install). Depending on your setup, you will likely spot attempts of type one in your system authentication log files. Attempts of type two should be visible in any firewall or web server logs you may have set up.

The rest of this post is written with a Debian based OS in mind, so commands may be slightly different depending on your distro. Below are a few basic items to begin securing your Linux server.

  1. SSH Keys
  2. Uncomplicated Firewall (UFW)
  3. Apache Web Server
  4. Fail2Ban

SSH Keys

You don’t need to use SSH keys to log into your server, but they make it a lot more secure than a generic root account with a password.

SSH keys utilize public-key cryptography where a public key is used for encryption and a private key is used for decryption. Your public key can be shared openly without compromising your server’s security. However, you will need to make sure your private key is never disclosed.

Your private key is a file that you will use every time you need to make a connection to your server. This means that in order to connect to your server, an individual must physically (or would it be digitally?) have this file. This communication occurs over port 22 by default and is known as Secure Shell (SSH).
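As a quick sketch, generating a key pair looks like the following. The file name and comment are placeholders, and the passphrase is left empty here only so the command runs non-interactively; in practice you would set one:

```shell
# Generate an Ed25519 key pair into a temporary directory (paths are placeholders)
keydir=$(mktemp -d)
ssh-keygen -q -t ed25519 -C "me@example.com" -f "$keydir/id_example" -N ""

# The .pub half is what goes on the server (in ~/.ssh/authorized_keys);
# the private half never leaves your machine
cat "$keydir/id_example.pub"
```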

This document over at Digital Ocean is extremely informative on how SSH works, how to setup SSH for your server and even walks you through disabling root login.

If you happen to have multiple servers, you will need to consider trade-offs between security and convenience. The issue is explained beautifully by tylerl and YaOzl at Stack Overflow, and I highly recommend reading through the thread. In summary, you must choose between using the same key for all of your servers or generating a different key for each server (potentially inconvenient). Moreover, since it is recommended you add a passphrase to your key pair, you have the option of using the same passphrase or a different one for each key pair.

Last but definitely not least, make sure to back up your private keys in a secure location and produce an additional external backup in case of an emergency. Your private key is only as secure as you make it.

Uncomplicated Firewall (UFW)

While you could directly use iptables to manage your server’s firewall, a simpler alternative is to use Uncomplicated Firewall (UFW). UFW is a front-end to iptables that is easier to learn.

UFW is disabled by default, so you will want to enable it as soon as possible. Before enabling it, add your first rule so that you can SSH back into your system. It is a good idea to set this rule to LIMIT to slow brute-force attacks on your server.

sudo ufw limit 22/tcp
sudo ufw enable
sudo ufw logging on 

If your home IP happens to be static (unlikely, but worth a look), you could tighten the rule even further by specifying your home IP as the source.
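In that case, the rule might look like the following (203.0.113.5 is a placeholder; substitute your own static IP):

```
sudo ufw limit proto tcp from 203.0.113.5 to any port 22
```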

If you are using your server to serve a website, you will need additional rules to allow connections on port 80. If your site makes use of SSL certificates, you will also need to open port 443. Since you want your site available to everyone, make sure to use ALLOW instead of LIMIT here.

sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

If you are using multiple servers, you can write even more specific rules for your firewall (e.g. having your web server’s port 80 only listen to your load balancer’s IP).
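For example, a rule along these lines (10.0.0.2 standing in for your load balancer’s private IP) restricts port 80 to a single source:

```
sudo ufw allow proto tcp from 10.0.0.2 to any port 80
```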

You can view your current UFW configuration with the command below.

sudo ufw status verbose

Apache Web Server

If you have installed the Apache web server, it might be a good idea to turn off some of the default settings. When Apache runs into a problem, it displays an error page that shows a little too much information to our visitors, including our OS and web server version. To remove this information, head over to your Apache folder and into your conf-available folder. Locate the file security.conf and look for the following directives to change your settings.

cat conf-available/security.conf | grep -in "ServerTokens"
cat conf-available/security.conf | grep -in "ServerSignature"

You will want to turn off ServerSignature and change your ServerTokens value to whatever is most appropriate for you.
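A reasonably hardened configuration might look like this (Prod is the most restrictive ServerTokens value, revealing only “Apache”):

```
# /etc/apache2/conf-available/security.conf
ServerTokens Prod
ServerSignature Off
```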

A second setting we may want to change is the directory listing that is enabled by default in the absence of an index.html file. This option can be overridden by a virtual host file, so if the configuration below appears to have no effect, consider looking into your virtual host configurations.

Lastly, if your website accepts uploads it is a good idea to limit your request size. By default, the request size is unlimited which can cause issues with your site or perhaps be abused in a malicious way.

Head over to your main Apache configuration file, apache2.conf and locate the directory tags near the bottom of the file.

<Directory /var/www/>
        # Remove directory listing; note the leading "-"
        Options -Indexes
        AllowOverride None
        Require all granted
        # Set the maximum request size in bytes
        LimitRequestBody 512000
</Directory>

Make sure to restart your Apache web server to save these changes.

sudo systemctl restart apache2

Fail2ban

Consider installing Fail2ban to prevent brute-force attacks. This will allow you to ban malicious IP addresses for a variable amount of time. The application ships with a SQLite database, so long-term bans can be preserved across server restarts.

sudo apt-get install fail2ban

Fail2ban is ready to use as soon as you install it, but I would recommend increasing the ban time and double-checking that the default settings correspond with your setup.

After installing Fail2ban, create a copy of the jail.conf file and name it jail.local. (Fail2ban is configured to read its settings from your .local file.) Next, locate the default bantime variable and set it to something higher; it is towards the top under the default settings. You can configure different ban times for different Fail2ban “jails”. If you would like to permanently ban these IPs, enter a value of -1.

cd /etc/fail2ban
cp jail.conf jail.local
cat jail.local | grep -in "bantime"
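After editing, the relevant section of jail.local might end up looking like this (the values are examples, not recommendations):

```
[DEFAULT]
# Ban offenders for one day (in seconds); use -1 for a permanent ban
bantime = 86400
```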

Now, reload Fail2ban to apply your changes.

sudo fail2ban-client reload
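You can also check which jails are active and inspect current bans; for example, assuming the standard sshd jail is enabled:

```
sudo fail2ban-client status
sudo fail2ban-client status sshd
```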

If you would like to further configure Fail2ban, I recommend this article which provides more background information and instructions.

Remember to routinely monitor your system for malicious activity. This is probably best done through specialized software or by writing your own scripts!

bookmark_borderWordPress via Digital Ocean: Increase Upload Size

If you have attempted to upload photos through the WordPress admin console, you were probably disappointed to find the upload size capped at the 2MB default.

However, if your site is hosted through Digital Ocean or you have access to your own server you can quickly change this.

We can make the changes in PHP’s configuration file; however, they will apply to all of your PHP sites. Alternatively, we can make the changes in an .htaccess file or an Apache virtual host configuration for more control.

Regardless of the approach, we are interested in three configuration directives: 1) post_max_size, 2) upload_max_filesize and 3) memory_limit. The default values are 8MB, 2MB and 128MB respectively. You should not need to alter the memory_limit directive unless your post_max_size is close to or greater than memory_limit. Also note that your post_max_size should be larger than your upload_max_filesize in order to successfully upload your files.
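For example, to accept uploads of up to 10MB, values along these lines would satisfy both constraints (the numbers are illustrative):

```
; php.ini excerpt
upload_max_filesize = 10M
post_max_size = 12M
memory_limit = 128M
```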

Now let’s log into our web servers.

php.ini

Make your way over to the PHP installation folder at /etc/php/. Your path may be slightly different depending on your PHP version. Drill down to the version folder you are using and into the apache2 folder (i.e. /etc/php/7.2/apache2). Here we will find the file php.ini.

Use the following commands to locate the line numbers for these directives in php.ini.

more php.ini | grep -n "post_max_size"
more php.ini | grep -n "upload_max_filesize"
more php.ini | grep -n "memory_limit"

Now, use your favorite editor to update these directives and save your changes. Please note that php.ini uses only “M” to denote megabytes. Restart the Apache server to apply your changes.

.htaccess or Apache virtual host configuration

If you are setting your configuration through an .htaccess file change your working directory to your WordPress folder and use your favorite editor to edit your .htaccess file.

If you prefer to do your configuration through an Apache virtual host, head over to your virtual host’s configuration file (in /etc/apache2/sites-available).

Add the following lines with your specific values, save your new configuration and restart the Apache server to apply your changes.

php_value post_max_size 12M
php_value upload_max_filesize 3M
#php_value memory_limit 128M

The max upload size in your WordPress admin console should now show your new upload limit.

If you are interested in hosting your own VM with Digital Ocean, please consider using my referral link so we can both earn Digital Ocean credit. Please visit this link to learn more about Digital Ocean’s referral program.

bookmark_borderWordPress: 404 After Changing Permalinks

Whew! This took me way longer than it should have. Long story short: I was missing the rewrite module on Apache, my Apache configuration file needed an update, and my Apache virtual host setup was messed up! I’m not positive at what point I broke my virtual host setup, but I knew I had made a mistake when I ran the command below and saw a backup file listed among my virtual hosts. Sadly, I only came across this command after a few hours of researching the issue. Let’s blame it on the fact that I had not had a morsel of food for seven hours.

apache2ctl -S

First, we must verify that the rewrite module in Apache is enabled. If the module is not enabled, we will not be able to run the directives in our WordPress site’s .htaccess file. If the module is already enabled on your server, you will receive a message stating so. It will also be visible under /etc/apache2/mods-enabled as rewrite.load. Run the commands below to enable the module and restart Apache.

sudo a2enmod rewrite
sudo service apache2 restart

After verifying the rewrite module is enabled, check whether your pages still return a 404 error. If they don’t, great! Otherwise, we continue. When you change the permalink format through your WordPress admin console, WordPress inserts rewrite rules into the hidden file .htaccess in your WordPress directory. .htaccess files contain overrides or additions to your site configuration. By default, Apache versions 2.3.9+ do not allow .htaccess files to override the configuration specified under /etc/apache2/apache2.conf. Apache’s main configuration file also includes any configuration you may have written for enabled virtual hosts. Below is a snippet from the default Apache configuration file.


# Sets the default security model of the Apache2 HTTPD server. It does
# not allow access to the root filesystem outside of /usr/share and /var/www.
# The former is used by web applications packaged in Debian,
# the latter may be used for local directories served by the web server. If
# your system is serving content from a sub-directory in /srv you must allow
# access here, or in any related virtual host.
...
<Directory /var/www/>
        Options Indexes FollowSymLinks
        AllowOverride None
        Require all granted
</Directory>

The AllowOverride None directive above states that, for the directory /var/www/, the server configuration may not be overridden through the use of .htaccess files. There are two reasons for this logic: 1) performance gain and 2) security. WordPress’s default behavior is to edit the .htaccess file when you change the permalink format, so you may choose to keep .htaccess, but keep in mind that Apache does not recommend the use of .htaccess if you have access to the main configuration file.
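For reference, the rewrite block that WordPress writes into .htaccess for standard (non-multisite) permalinks looks like this:

```
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
```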

The AllowOverride directive accepts three kinds of values: 1) None, 2) All, and 3) a list of directive types. If you choose to keep using the .htaccess file, the third option allows for better security than simply entering All, because it lets you specify which directives may be overridden. For the purposes of this particular issue we will use the value FileInfo so we may make use of the rewrite module, but visit the official Apache documentation for more information on the directive types available and how to use them.

Below is the configuration you want present in your Apache configuration file so WordPress’s .htaccess may rewrite your URLs and serve your pages.

<Directory [YOUR_DIRECTORY_PATH]>
        Options Indexes FollowSymLinks
        # Alternatively, use AllowOverride All
        AllowOverride FileInfo
        Require all granted
</Directory>

If you would like to remove the .htaccess file and instead include your rewrite logic in your configuration file you will need to include the rewrite directives within the Directory tags as shown below.

<Directory [YOUR_DIRECTORY_PATH]>
        Options Indexes FollowSymLinks
        # Alternatively, use AllowOverride All
        AllowOverride FileInfo
        Require all granted
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
</Directory>

After editing your configuration files, make sure to restart your Apache server to activate your changes. By this point, the 404 error on your WordPress pages should be gone.

If it is not, and you are making use of virtual hosts, I urge you to verify that your Apache virtual host configuration is correct. I spent quite a while on this issue because I had a larger underlying problem with my virtual host configuration. I was able to identify it by refusing to make changes to apache2.conf when I felt positive the changes in my virtual host configuration file should have worked. In my case, a file I had renamed as a backup was still enabled as a virtual host and occupied the same server name as the newer file I thought was active; both specified the same server name. I removed the backup from the server entirely and everything worked. My brain was toasted.

bookmark_borderHaProxy: Cannot Bind Socket

As a total noob at system administration, it took me a while to realize that my machine’s services might have been stopped or restarted after Digital Ocean reached out about performing an emergency droplet migration, having detected problems on the physical server where my load-balancing droplet/VM was running.

This was confirmed by running the following command and noting that the last time HaProxy had been active was on Digital Ocean’s maintenance day. (Please note that I am running these commands on Ubuntu.)


systemctl status haproxy.service


With this newfound knowledge, I attempted to restart HaProxy as sudo but ran into an error. After checking the HaProxy log (located in /var/log/), I found the following error message.


"Jul 15 17:10:13 ab1 haproxy[32723]: [ALERT] 202/171013 (32723) : Starting []: cannot bind socket [000:000.000.000:80]:"


Something was already occupying/listening on port 80! After using the netstat command and feeding its output to grep, we identified the culprit…Apache! Of course: my droplet was restarted, and the apache2 service is probably set to start on boot.

sudo netstat -ltnp | grep -i ":80"
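On newer distributions, netstat is deprecated in favor of ss, which takes similar flags for the same check:

```
sudo ss -ltnp | grep -i ":80"
```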


Our site was up and running after stopping the Apache service and starting HaProxy.

sudo service apache2 stop
sudo service haproxy start


I was able to confirm the issue arises on booting by checking our boot log with the following command.


sudo journalctl -b


Jul 15 16:34:11 ab1 systemd[1]: Started The Apache HTTP Server.
Jul 15 16:34:11 ab1 haproxy[978]: [ALERT] 195/163411 (978) : Starting frontend http: cannot bind socket [000.000.000.000:80]
Jul 15 16:34:11 ab1 haproxy[978]: Proxy www-https started.
Jul 15 16:34:11 ab1 haproxy[978]: Proxy www-https started.
Jul 15 16:34:11 ab1 haproxy[978]: Proxy app-backend started.
Jul 15 16:34:11 ab1 haproxy[978]: Proxy app-backend started.
Jul 15 16:34:11 ab1 systemd[1]: haproxy.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 16:34:11 ab1 systemd[1]: haproxy.service: Failed with result 'exit-code'.
Jul 15 16:34:11 ab1 systemd[1]: Failed to start HAProxy Load Balancer.


Now, if we want to prevent this from happening in the future, we have to set which services auto-start on boot, which will be the topic of an upcoming post.
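As a preview, and assuming systemd, the general idea is to disable auto-start for the service you don’t want and enable it for the one you do:

```
sudo systemctl disable apache2
sudo systemctl enable haproxy
```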


Hasta la vista baby!