
SSH Tunneling

We have all been in situations where a server or a service like RDS (database), Elasticsearch, etc. sits in a private network such as a VPC (AWS) or a VNet (Azure).

Making them public so they can be reached from a local machine is not secure and is considered bad practice.

How to do it?

SSH tunneling is a way to connect to services behind a bastion server without ever exposing those services to the public. With SSH tunneling, you get local access without leaving the comfort and security of your SSH connection.

To do it, run the following command – here we are assuming you are accessing Elasticsearch in AWS. The -L flag takes a local_port:remote_host:remote_port mapping:

ssh -N -L 9200:<elasticsearch_endpoint>:443 -i key.pem username@bastion_server_ip

If your key has a passphrase, you’ll be asked to enter it now. Once that is done, hit Enter and the terminal will run the tunnel. (You will not see the next prompt, and closing that terminal will stop the tunnel.)
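If you would rather not keep a terminal occupied, the tunnel can be backgrounded with -f. A minimal sketch, assuming a hypothetical VPC Elasticsearch endpoint (substitute your own endpoint, key file and bastion details):

```shell
# Assumed values - replace with your own endpoint, key and bastion address.
ES_ENDPOINT="vpc-my-domain.ap-south-1.es.amazonaws.com"  # hypothetical endpoint
ssh -f -N -L 9200:"$ES_ENDPOINT":443 -i key.pem username@bastion_server_ip
# -f forks ssh into the background after authentication completes.

# To stop the tunnel later, find and kill the forked ssh process:
pkill -f "ssh -f -N -L 9200"
```

This is the same tunnel as above, just detached from the terminal, so closing the terminal no longer kills it.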

Accessing your ES

Now, to access the Kibana of your Elasticsearch domain, open your browser and go to the forwarded local port, for example https://localhost:9200/_plugin/kibana if you forwarded local port 9200.
You will see an SSL error, so click on the advanced option, click on “I understand the risk”, and continue.

That’s it.


AWS Control Tower

Let us consider this scenario. You are a cloud architect/DevOps infra person at your organization. Customers tell you the infrastructure they need (or at least their basic requirements) and you start building it.

Next thing you know, the customer is running their entire production workload in your account. From a managerial and accounting standpoint, that is going to be really difficult. You have your own workloads, for which your company has to pay, and you have your customer’s workload, for which the customer has to pay.

Or there could be another situation where you have multiple teams working on different projects, and they all need their own AWS environments in which to test their applications. This means giving those teams total control over AWS environments when they are new to the platform and cannot yet be trusted to uphold the security standards.

This is where AWS control tower and all its related solutions come together with centrally managed policies, governance and standards with options for account specific or consolidated billing.

What is AWS Control Tower?

If you’re an organization with multiple AWS accounts and teams, cloud setup and governance can be complex and time consuming, slowing down the very innovation you’re trying to speed up. AWS Control Tower provides the easiest way to set up and govern a new, secure, multi-account AWS environment based on best practices established through AWS’ experience working with thousands of enterprises as they move to the cloud.

AWS Control Tower gives you a single place to streamline AWS account setup, infrastructure setup, governance, standardisation, policies, permissions and account access, especially when a single AWS account does not cut it for the organization and its teams to function and innovate at the speed the world demands.

AWS Organizations, guardrails, AWS Config, AWS SSO and all their complementary services come together under AWS Control Tower.

Why should it be used?

AWS Control Tower brings a host of features to the fingertips of the people in charge of infrastructure standards and auditing. It ensures that the policies and configuration rules set up by the root AWS account of the organization are applied to all the child accounts in the organization.

Thus, anyone in the organization, even without prior experience in AWS resource management, can set up services with the right standards and best practices. This reduces the overhead for the infrastructure team in audits, maintenance and governance.

AWS Control Tower also gives proper auditing dashboards which can be used to check for discrepancies in the child accounts where security or any other set policies have not been met.

It also gives a single login across all the child accounts, so permissions can be centrally managed from the parent account for each user.

How does it work?

AWS Control Tower uses the same policy and permission system used by IAM, AWS Config, etc., but extends it to the child accounts. This ensures that any rules defined by the parent account cannot be overridden by a child account.

Thus any user who is part of the parent account’s SSO can be given precisely the permissions they require in the child account(s). This kind of centralised management makes governance over multiple accounts, which might belong to internal teams or external customers, very easy.
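As an illustration of how a preventive guardrail works under the hood, here is a minimal service control policy (SCP) sketch that a parent account could attach so that child accounts cannot disable AWS CloudTrail. The Sid and the exact action list are made up for the example; real Control Tower guardrails are managed for you:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDisablingCloudTrail",
      "Effect": "Deny",
      "Action": [
        "cloudtrail:StopLogging",
        "cloudtrail:DeleteTrail"
      ],
      "Resource": "*"
    }
  ]
}
```

Because SCPs are evaluated before any IAM policy in the child account, even a child-account administrator cannot override this rule.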

How to get started?

1. Enable STS

Enable STS in all the regions. If this is not done, the landing zone creation will fail.

2. Go to AWS Control Tower console

Search AWS Control Tower and open the console

3. Start landing zone setup

Click the button to start the landing zone setup

4. Fill in required details

To start the setup, AWS requires two core accounts, for logging and audit. Some resources are created in these accounts and in the future custom accounts, which allows the parent account to manage the child accounts.

For logging and audit, two email addresses are required to be used as the root of these AWS accounts. These email IDs must not already be in use as the root of any existing AWS account.

5. Wait for the setup to complete

The landing zone setup takes around 1 hour to finish. Once that is done, further configurations can be done to set up the policies and rules.

That’s it, AWS Control Tower setup is complete.

What does it cost?

AWS Control Tower, as is, is a solution and does not cost anything by itself. There is a cost of around $5 per month, but it comes from resources like AWS Config, guardrails, Lambda functions, etc. that are set up during the creation of the landing zone for the communication and management of the child accounts.


AWS Control Tower is a powerful tool for a modern organization which is growing at a very fast pace. It streamlines all the processes, setup, deployments, account creations, permission management etc. This reduces the management overhead and ensures that the organization can spend more time pushing forward instead of spiralling down the management rabbit hole.


NethServer on AWS

If you have ever wanted to set up a VPN server, mail server, mailbox, web server, or Nextcloud (an open-source Google Drive alternative), then you can go through this blog and set it up in minutes using NethServer.

NethServer new management web UI at 9090 port

Since AWS does not provide a NethServer AMI by default via the Marketplace or Community AMIs, I had to make a VM image on my local server, modify it, import it to AWS, and then modify it again to make it work on AWS.

I have made that image public and it is available in the Mumbai region. If you need it in any other region, contact me or you can make an AMI after launching the server in the Mumbai region and transfer it to whichever region you like.

So, to set up a NethServer in AWS, follow the steps below.

Setting up EC2

The instance

Go to the AWS EC2 console and click on Launch Instance

There, go to Community AMIs and search for NethServer, or you can search for the AMI ID below. Do note that this AMI ID might change if I update the AMI.


Security group rules

The security group should allow the following as per requirement for accessing the server

For the latest management web UI, open the port 9090 inbound

For old management web UI, open port 980 inbound

For SSH Access, open port 22 inbound

You can keep outbound access as full open for all traffic.
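If you prefer the CLI over the console, the inbound rules above can be added like this. The security group ID is a placeholder, and opening these ports to 0.0.0.0/0 should be narrowed to your own IP where possible:

```shell
SG_ID="sg-0123456789abcdef0"  # placeholder - use your security group ID

# Open the new web UI (9090), old web UI (980) and SSH (22) inbound.
for PORT in 9090 980 22; do
  aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port "$PORT" --cidr 0.0.0.0/0
done
```

Outbound traffic is allowed by default in a new security group, so nothing extra is needed for the "full open" outbound rule.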

User Access

The NethServer AMI by default comes with a user. The details are as follows

Username: admin
Password: Nethserver@123

Domain/Other details

By default, this AMI comes with a preset FQDN.

This can be changed, and my contact details are in the Company name field, so change them according to your requirements.

Also note that you have to change the FQDN before you install LDAP in account providers (Users and Groups).

You can also use Active directory (external) for user and group management.

Also you need to setup LDAP to change the password of the default admin user

Known caveats

Currently there is an issue with NethServer where it requires a green interface at any cost, without which it will throw an error at the ipconf step.

To fix this, create a network interface in AWS and attach it to the instance.

Make this a green network. Assign the network interface’s private IP as the static address for green, along with the subnet mask and gateway of your VPC subnet.


For more details on how to set up the individual services in NethServer, you can visit the NethServer documentation here.


Why there was an outage for my site recently

So initially I was running this site on an EC2 instance which cost around 1200Rs per month. Then I learnt about serverless deployments and made my site serverless, about which you can read here.

So after going serverless, I was receiving a monthly bill of 2400Rs, which was fine since I could brag about my site being infinitely scalable.

But this month (August 2020) I was looking at my billing console and had a bill of 2200Rs by August 9th. This was alarming. The issue was that Google Search Console was sending more and more requests to my site.

This woke up my Aurora serverless RDS from suspended mode to 1 ACU, and Aurora serverless is costly.

So I switched off the ECS service from 22:00 to 10:00 using a Lambda function, but even then I was paying for the load balancer.
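The scheduled shutdown itself is a one-liner against the ECS API. A sketch of what the Lambda was effectively doing (the cluster and service names are placeholders):

```shell
# Scale the service down to zero tasks at night...
aws ecs update-service --cluster my-cluster --service my-wp-service --desired-count 0

# ...and back up to one task in the morning.
aws ecs update-service --cluster my-cluster --service my-wp-service --desired-count 1
```

A pair of scheduled CloudWatch Events rules triggering a Lambda that makes these calls is enough; note that this does not remove the load balancer's hourly charge.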

So now I have changed it back to a t3a.micro instance, which costs me 400Rs per month.

This is why there was an outage.


PC gaming without a rig – using Parsec on Windows!

If you are like me, building a gaming rig would be one of your dreams, but to build a rig that can play high end games on ultra graphics would cost you an arm and a leg.

If you are stuck in this situation, then read on to find out how to start gaming without building a rig, and no, this is not about gaming services like Stadia or Shadow, which are not available in India yet.


There are a few requirements that I’ll mention below that you’ll need to get this working properly.

  • AWS Account – Basic knowledge on how to launch servers
  • A good internet connection – 30-40mbps – FTTH (for low latency)
  • A job/business to pay the bill – AWS isn’t free, but it’s cheap
  • A game – (Always buy your games when they are high end because otherwise the developers would feel that they can make more money from s**t games like candy crush and they’ll stop making good games. We really don’t want that.)
  • Local machine running any of the following
    • Windows 7+
    • Google chrome
    • MacOS 10.11+
    • Android – With google play
    • Raspberry pi 3
    • Linux – Ubuntu 18.04+

Let’s get started

So to get started, first we need a Parsec account and then we will move on to setting up the game server in AWS.

Setting up Parsec

What is parsec?

So Parsec is like VNC/RDP on steroids. They give very low latency interactive streaming from a remote PC to your local machine.

They say they use a protocol called BUD (Better User Datagrams), which is like UDP on cocaine and which they developed in-house specifically for gaming, but I think they are using magic and this new protocol talk is just to cover up the magic.


I hosted my server in the Mumbai region in AWS and I live in Thrissur, Kerala. The LOS distance between my house and Mumbai is 1100 km, so it should be twice that in network fibre length, and with all those switches and routers in between, I expected it not to work. But it worked! The latency I got is only 30-40 ms, which is basically like gaming on your local machine. You won’t feel it. 30 ms is approximately a lag of 1 frame at 30fps, which is nothing.

You can read more here

Creating an account

To setup a parsec account first sign up here. It is a very simple process

First give a username

Then they’ll ask for an E-mail ID and password

Always use a strong password and get the achievement unlock

Now you’ll get a confirmation E-mail, so click the confirmation link and your account is ready.

The client

Now that your account is ready, it is time to download the client (or you can use their web client) and login.

You can download the appropriate client for your platform from here

Setting up server in AWS

You always have the option to launch an EC2 instance in the classic method, i.e. without a VPC, but I would not recommend that. So for this setup, we’ll start with the VPC and its sub-services and then move on to creating the server.

Setting up a VPC

I have covered this in my blog on serverless wordpress but I will cover the basics again here.

First login to your AWS account and go to the VPC console

Create a VPC here by giving a proper CIDR block. We don’t need IPv6 for this, so you don’t have to enable it, but do give a Name tag so that we can understand what it’s for.

Next, create an internet gateway and attach it to the VPC.

Then create a subnet for the EC2 instance.

Once that is done, create a public route table (we don’t need a private route table for this one), add a route through the internet gateway, and associate the created subnet.

Setting up EC2

To set up the EC2 instance, first create the security group, then subscribe to the required AMI from the Marketplace, and then launch the instance.

Security group

Go to the security group tab in the EC2 console and enter the following rule:

All traffic | All | All | Anywhere | 0.0.0.0/0, ::/0

This is not at all secure, but it is convenient. You can check here and here to get the exact port requirements.

AWS Marketplace

Since Parsec does not support hosting on Linux yet, we have to use Windows. So you can subscribe to the NVIDIA Gaming PC – Windows Server 2019 AMI from the AWS Marketplace.

Login to your AWS account, go to the AWS marketplace console and subscribe the AMI.

Setting up the Server

Once the AMI is subscribed, you can launch your instance with the following configuration

AMI – NVIDIA Gaming PC – Windows Server 2019

Choose the instance type g4dn.xlarge (which is the smallest that you can choose).

VPC – Newly created VPC
Subnet – Newly created subnet
Enable Public IPv4

Shout out to my buddies at LaresAI – I’m creating this in their dev env VPC

Add 50-60 GB of additional storage. You will see 125 GB of storage by default, but that is ephemeral and you’ll lose the data if the instance is stopped and started. If you add additional storage, you’ll have to format it after boot-up. Otherwise, you can also increase the C drive storage from 30 GB to any value that you like (provided you are OK paying for it).

Now give a name tag which you can understand

Now specify the security group that you already created

That’s it. Now you can review and launch that instance.
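For reference, the same launch can be scripted with the AWS CLI. Every ID below is a placeholder, not a real value; look up the actual NVIDIA Gaming PC AMI ID for your region after subscribing:

```shell
# Placeholder IDs throughout - substitute the AMI, key, subnet and
# security group from your own account.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type g4dn.xlarge \
  --key-name my-key \
  --subnet-id subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --associate-public-ip-address \
  --block-device-mappings 'DeviceName=/dev/sdb,Ebs={VolumeSize=60,VolumeType=gp2}' \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=parsec-gaming}]'
```

The block-device mapping adds the 50-60 GB persistent volume discussed above, alongside the ephemeral storage that comes with g4dn instances.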

You have to wait for some time for the instance to be launched and then you can get the password for the Administrator account for the Windows server.

Logging in

To login to the Windows server, go to your EC2 console, select the server and click on connect.
You will see an option to upload or copy/paste the key that you selected when you launched the instance.

This key is required to get the automatically generated Administrator password.

If it shows some error, you have to wait for some more time for the instance to complete the setup.

Once you get the password, you can use Remmina or any RDP client to connect to the server with its public IP.

Once connected, install Parsec, log in the same way you logged in on your client, and ensure that the Hosting option is enabled.

This will show your PC in parsec and you can see the same in your client

Connecting with Parsec and setting up sound

Now that Parsec is connected, you have to do a few things to get it up and running for gaming.

Disconnect from RDP and connect again using Parsec from your client. It should work. If it does not, check the troubleshooting steps mentioned on the Parsec website or join their Discord.

Once your Parsec is connected, you’ll notice that there is no sound. This is because AWS does not attach a sound hardware since it is a server. To fix this, you can install a virtual sound hardware from here.

Click on the install batch file to install the virtual sound driver. You can enable spatial sound for a better, more immersive experience while playing games.

Now you are ready to play any game


Now you can install Steam or any gaming platform.

I’m using Steam. I installed Shadow of the Tomb Raider.

Once installed you can start playing it like it is your local PC but with graphics on ultra.

You can see the screenshots from my gameplay taken from my laptop below.


Setting up a CentOS 8 server as a virtualization host

If you are trying to setup virtual machines in a CentOS 8 server, the following are the steps.

I set this up in AWS with an m5d.metal instance (only metal instances allow direct hardware access in AWS. AWS does not support nested virtualization) but it is the same for any CentOS 8 server.

Once you get the server up and running, check if your hardware supports virtualization. If it is a new machine with a recent processor, it should, but check just in case.

grep -E '(vmx|svm)' /proc/cpuinfo
lsmod | grep -i kvm

Now we install the X.Org components so that you can view the virt-manager GUI running on the server from your local machine.

sudo yum install xorg-x11-xauth xorg-x11-fonts-* xorg-x11-utils

Now install all the packages required for virtualization.

sudo yum groupinstall "Virtualization Host"

Once the installation is complete, check the status of libvirtd service which is required to run the virtual machine.

sudo systemctl status libvirtd

If it is not running, then run the following command to start the service

sudo systemctl start libvirtd

Remember to enable the service so that the service starts up automatically after reboot

sudo systemctl enable libvirtd

Currently, only root can connect to the libvirtd daemon to create VMs, but since we are planning to use the GUI and not the virt-install command-line tool, it is better to add the user to the libvirt group. This is because root doesn’t play well with the X.Org server.

sudo usermod -aG libvirt centos

Now you have to log off and log in.

You can run the following command to kill all the processes of the user (here it is centos), which will basically log you out.

sudo pkill -U centos

You can log back in with the -X option so that when you run the virt-manager, you can see it in your local machine.

ssh -i key.pem -X centos@IP

Once you have logged in, check that you are in the libvirt group by running the following command.

groups
Confirm that the installation of the packages completed successfully by running the following command.

sudo virsh version

Once all that is done, you can run the following command to start the GUI Virtual Machine Manager.

virt-manager
That’s it. You may now set up virtual machines on the server using the same steps you have been following till now with Virtual Machine Manager.


Setting up a serverless WordPress in AWS

The first step to setting this up is getting an AWS account. If you already have one, then great! Otherwise, follow the instructions here to create and activate an AWS account.

Login to your AWS account

The first step to setting up any resources in AWS is to create a VPC.

To create a VPC, navigate to the Services tab → Select VPC → then select Your VPCs → click on “Create VPC”.

Specify your VPC name in the Name tag, mention the IPv4 CIDR block, and click on “Create”. Your VPC will be created.

After creating the VPC, DNS hostnames should be enabled. This is done so that the EFS DNS endpoint is resolved from inside the VPC and not via the public internet. To enable DNS hostnames, select the VPC, click on the Actions button, and select the Edit DNS hostnames option.

Check the enable option in DNS hostname and then click on Save button.

Next we need to create subnets for the load balancer, ECS, RDS and EFS

First create the subnet for the Load Balancer

To create a highly available system, the subnets for the ALB are created in two separate availability zones.

Do the same for ECS

I was doing this on a 4G internet without proper network coverage so you too have to sit through my loading screens… Make sure to pause here for 5 to 10 seconds as a tribute to the people who use 2G data.

Create two subnets for EFS also

Finally create two subnets for the RDS.

As you can see, the subnets that we created are in the ap-south-1a and ap-south-1b availability zones. We did not use ap-south-1c only because AWS did not support Aurora Serverless MySQL in the ap-south-1c availability zone while I was creating this setup for this website and for the new one being set up while writing this blog.

Once all of that is complete, It is time to create an Internet gateway.

To create an Internet gateway, open VPC console and click on Internet Gateways and click Create internet gateway.

Enter a name for the internet gateway, and click Create internet gateway.

The internet gateway is now created as shown below. Now it should be attached to a VPC

To attach the internet gateway to a VPC, click on Actions and click on Attach to VPC

Select the VPC and click on Attach internet gateway

That marks the end of internet gateway setup.

Once that is done, the next step is to create two route tables.

One for Public internet access via the internet gateway using a public IP and another for private access only.

This acts as an extra layer of security over the security groups, so that resources like EFS and RDS that do not require internet access at all can stay in a private network even if they have a public IP or a publicly resolvable DNS name.

We shall start with a Public route table. Use the route tables section in the VPC console to create the route tables.

Here we mention the route table name as Public and specify the VPC and click create

We then repeat the same process as before to create a Private route table. It is always recommended to not use the default route table of a VPC and instead create a separate public and private route table.

Next we add the routes to the public route table, with the IPv4 0.0.0.0/0 and IPv6 ::/0 routes going through the internet gateway.

We then associate the subnets that require internet access with this public route table.

The associations are as shown below and the unselected subnets in the public route table are associated in the private route table to ensure that no subnets are there in the default route table.

That marks the end of the VPC section.
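The whole VPC section can also be condensed into a few CLI calls. A sketch with hypothetical CIDR blocks, where the ID returned by each call feeds the next (this needs an AWS account and credentials to actually run):

```shell
# Create the VPC and enable DNS hostnames (needed for the EFS endpoint).
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --query 'Vpc.VpcId' --output text)
aws ec2 modify-vpc-attribute --vpc-id "$VPC_ID" --enable-dns-hostnames Value=true

# One example subnet; repeat per tier (ALB, ECS, EFS, RDS) and per AZ.
SUBNET_ID=$(aws ec2 create-subnet --vpc-id "$VPC_ID" \
  --cidr-block 10.0.1.0/24 --availability-zone ap-south-1a \
  --query 'Subnet.SubnetId' --output text)

# Internet gateway plus a public route table.
IGW_ID=$(aws ec2 create-internet-gateway \
  --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id "$VPC_ID"
RT_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
  --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$RT_ID" \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"
aws ec2 associate-route-table --route-table-id "$RT_ID" --subnet-id "$SUBNET_ID"
```

The private route table is simply a second create-route-table call with no internet gateway route, associated with the EFS and RDS subnets.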

Now we move on to the next mundane, yet most important part – The security groups. Do NOT mess this up!

To create a security group, navigate to the Services tab → Select VPC → click on Security Groups, then click on Create security group.

Start with ALB Security group

Enter a security group name such as SL-WP-ALB-SG, then select the VPC created in the first step, give the inbound and outbound rules, and then click on Create security group.

For the ALB security group, the name and description are given, and the HTTP (and, if you are using SSL, HTTPS) ports are opened to the public.

Make sure to delete the public open outbound access.

Create the security group for ECS

Allow inbound HTTP from the load balancer

Make sure to delete the outbound public rule and add HTTP and HTTPS access to the public from the ECS. It is also good to label the security group rules for future reference.

Next create the security group for EFS.

Allow NFS inbound from ECS security group in the EFS security group

Also remember to delete the outbound public access since EFS does not require any outbound access.

Next create a security group for RDS

Here only allow inbound MySQL port from ECS.

RDS also does not require outbound access and hence all outbound rules are removed.

And below is the screen that is shown when you successfully create a security group.

Now we backtrack and add the outbound security-group-to-security-group rules matching the inbound rules that we created in the previous steps.

We start with ALB and allow outbound HTTP access to ECS

We also add outbound NFS and MySQL access from the ECS security group to the EFS and RDS security groups.

If you have followed all those instructions carefully, you have setup a secure system with the proper security group configuration.
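The security-group chain described above (ALB → ECS → EFS/RDS) can also be expressed in the CLI. A sketch with placeholder IDs; note how --source-group references another security group instead of a CIDR block:

```shell
# Placeholders - use the group IDs created in your own VPC.
ALB_SG="sg-000000000000000a1"
ECS_SG="sg-000000000000000b2"
EFS_SG="sg-000000000000000c3"
RDS_SG="sg-000000000000000d4"

# ALB accepts HTTP from anywhere.
aws ec2 authorize-security-group-ingress --group-id "$ALB_SG" \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
# ECS accepts HTTP only from the ALB's security group.
aws ec2 authorize-security-group-ingress --group-id "$ECS_SG" \
  --protocol tcp --port 80 --source-group "$ALB_SG"
# EFS accepts NFS (2049) only from ECS.
aws ec2 authorize-security-group-ingress --group-id "$EFS_SG" \
  --protocol tcp --port 2049 --source-group "$ECS_SG"
# RDS accepts MySQL (3306) only from ECS.
aws ec2 authorize-security-group-ingress --group-id "$RDS_SG" \
  --protocol tcp --port 3306 --source-group "$ECS_SG"
```

Group-to-group references keep working even as task IPs change, which is exactly what a Fargate service needs.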

Now that the security groups are out of the way, we move on to the interesting parts. We shall start with an EFS.

We need EFS so that the container that we create can have persistent storage for the WordPress files like uploads, plugins etc.

To create the EFS, navigate to the Services tab → Select EFS, then click on Create file system.

To configure network access, select the VPC created in the first step, select the subnets and the security group created for EFS in the previous steps, and then click on Next step.

In Add tags, name your file systems in value and then click on next steps.

To configure client access, click on Add access point, then enter a name, set the directory path as /WP, set the User ID, Owner User ID, Group ID, Owner Group ID and Secondary Group IDs as 1000, set Permissions as 755, and then click on Next step.

Finally click on Create File System

EFS setup is now complete and it’s creation will take some time.
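The EFS steps above map to three CLI calls. The subnet and security group IDs are placeholders; the POSIX user/group and permissions mirror the console values:

```shell
# Create the file system.
FS_ID=$(aws efs create-file-system --tags Key=Name,Value=sl-wp-efs \
  --query 'FileSystemId' --output text)

# One mount target per EFS subnet, guarded by the EFS security group.
aws efs create-mount-target --file-system-id "$FS_ID" \
  --subnet-id subnet-0123456789abcdef0 \
  --security-groups sg-0123456789abcdef0

# Access point rooted at /WP, owned by UID/GID 1000 with 755 permissions.
aws efs create-access-point --file-system-id "$FS_ID" \
  --posix-user Uid=1000,Gid=1000 \
  --root-directory 'Path=/WP,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=755}'
```

The access point ID returned by the last call is what the ECS task definition will reference later.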

Now while we wait for the EFS creation to complete, we can move on to RDS

To create the RDS cluster, we first need to create a DB subnet group. For this, navigate to the Services tab → Select RDS.

Then click on Subnet groups to create the DB subnet group, enter a subnet group name such as sl-wp-db and a description, and select the VPC created in the first step.

In Add subnets, select the two availability zones corresponding to the two subnets created earlier for RDS, and then click on Create.

Make sure the subnets are in ap-south-1a and ap-south-1b since Aurora serverless is supported only in those two availability zones in the Mumbai region when I tried to set it up.

After successfully creating the subnet group, click on the Databases option on the left side, then click on Create database.

Choose the database creation method as Standard, then select the engine type as Amazon Aurora and the DB version as 5.6.10a or 5.7-2.7.1, as serverless only supports those two versions, and remember to choose Serverless.

In DB cluster identifier, enter the DB name, and in credentials settings enter the DB username and password. You can also select the required minimum and maximum capacity.

Select the VPC and choose security group as existing group and select the security group created for RDS in connectivity. You can also enable the Data API which allows running queries from the AWS console. It is also free of charge.

You can create an initial database called wordpress.

Then click on create database.

Now the RDS setup is complete and it will take some time to be available.
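The same Aurora Serverless cluster can be sketched as a single CLI call. The identifiers, password and IDs here are placeholders for illustration:

```shell
# Placeholder identifiers and credentials - substitute your own.
aws rds create-db-cluster \
  --db-cluster-identifier sl-wp-db \
  --engine aurora-mysql \
  --engine-mode serverless \
  --master-username wpadmin \
  --master-user-password 'use-a-strong-password' \
  --database-name wordpress \
  --db-subnet-group-name sl-wp-db \
  --vpc-security-group-ids sg-0123456789abcdef0 \
  --scaling-configuration MinCapacity=1,MaxCapacity=2,AutoPause=true \
  --enable-http-endpoint
```

--enable-http-endpoint turns on the Data API mentioned above, and AutoPause=true is what lets the cluster suspend when idle (the very behaviour that also causes cold-start delays).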

Now the RDS is being created and it might take some time. So we will setup the ALB during this time

Go to the EC2 Console and click Load balancer

Create an Application load balancer

Since I am not adding an SSL and mapping a domain, I am only going to use an HTTP (80) listener.

Choose the right VPC and the subnets that we created previously for ALB.

In the next window you will see an error; this is because we did not enable HTTPS. If an HTTPS listener were enabled, this is where you would specify the SSL certificate and the security policy for SSL.

On the next page, specify the security group that we created for ALB.

Next create a target group with IP target type.

Skip the step to add targets

Review your settings

Click create and your ALB will start provisioning.

Now that the ALB is setup, we can move on to the Serverless WordPress part. The ECS setup.

First open up the ECS console

Create a new cluster with Networking only option.

Give a name for your cluster

That’s it, a cluster is created.

Now we have to create a task definition. Task definition is what specifies the container settings like image, volume, CPU, memory etc.

Create a task definition with FARGATE compatibility.

Give a name for your task definition. You can also add a role if your container needs to access AWS services in the Task Role section.

You’ll have to create a task execution role and specify it.

Now set the required compute capacity for the container. Here the memory is set as 0.5 GB with 0.25 vCPU. Do note the allowed combinations of CPU and memory.

Now, since we are going to add an EFS volume for persistent storage, scroll down and click on Add volume.

Give a name and choose EFS instead of Bind Mount; Bind Mount is for ECS on EC2.

Select the file system and the access point ID, also enable the transit encryption and click on Add

Once that is over, scroll back and click the Add containers button. This is where we will configure the container settings.

So, first start with a name for the container, then move on to the image URL. It can be ECR or Docker hub image.

You can set a soft memory limit if required and then in the port mapping section, enter the port 80 since this is HTTP.

Since this is Fargate, you cannot set the host port. You can only expose the container port.

Once that is done, scroll down and enter the following environment variables. These are from the WordPress Docker image page.

WORDPRESS_DB_HOST: <rds endpoint>
WORDPRESS_DB_NAME: <name of the db created in RDS>
WORDPRESS_DB_PASSWORD: <strong password>
WORDPRESS_DB_USER: <mysql user for wordpress>

Next scroll down to the STORAGE AND LOGGING section and click the drop down next to mount points.

Select the volume that was previously created and enter the mount point as /var/www/html, since this is where the WordPress contents will reside.

Now scroll down and click Add.

Now click on create to create the task definition.

As you can see, the first revision of your task definition has been created.
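Under the hood, the task definition you just created through the console is JSON. A trimmed, hypothetical fragment showing how the EFS volume and the WordPress container fit together (the IDs, family name and truncated environment list are placeholders for illustration):

```json
{
  "family": "sl-wp-task",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "volumes": [
    {
      "name": "wp-content",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-0123456789abcdef0",
        "transitEncryption": "ENABLED",
        "authorizationConfig": { "accessPointId": "fsap-0123456789abcdef0" }
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "wordpress",
      "image": "wordpress:latest",
      "portMappings": [{ "containerPort": 80 }],
      "environment": [
        { "name": "WORDPRESS_DB_HOST", "value": "<rds endpoint>" },
        { "name": "WORDPRESS_DB_NAME", "value": "wordpress" }
      ],
      "mountPoints": [
        { "sourceVolume": "wp-content", "containerPath": "/var/www/html" }
      ]
    }
  ]
}
```

Keeping a copy of this JSON in version control makes it easy to register new revisions from the CLI instead of clicking through the console each time.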

Now move back to the cluster, and create a new Service.

Specify it as being FARGATE and select the Task definition.
Next is the important part.

You have to select the platform version as 1.4.0. If you select LATEST, it is actually 1.3.0. I don’t know why it is set up like that, but EFS support was brought in with 1.4.0, and hence if LATEST or 1.3.0 is selected you’ll get an error saying that some capability (they won’t say which) in the task definition is not compatible with the chosen platform version.

After giving the service name and the number of tasks (we can start with 1), we move on to the next section which is the network configuration.

Select the VPC and the subnets.

Next select the security group we created for ECS.

Now click on Application Load balancer and set the health check grace period as 60 seconds.

Select the load balancer that was previously created, select the container name:port, and click on Add to load balancer.

Next select the target group that we had previously created.

That will fill up the rest of the fields and we can move on.

Now we can set up autoscaling. In this setup I am not configuring autoscaling, but it is as simple as giving the maximum, minimum, and desired number of tasks.

Now review your configuration and once you are satisfied, you can click Create Service.

Now the service is created, wait for the containers to spin up.
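The service creation, including the platform-version gotcha above, can be captured in one CLI call. Names, ARNs and IDs are placeholders; note the explicit --platform-version 1.4.0:

```shell
# Placeholder names, IDs and ARN - match them to your own cluster,
# task definition, target group and VPC resources.
aws ecs create-service \
  --cluster sl-wp-cluster \
  --service-name sl-wp-service \
  --task-definition sl-wp-task:1 \
  --desired-count 1 \
  --launch-type FARGATE \
  --platform-version 1.4.0 \
  --health-check-grace-period-seconds 60 \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}' \
  --load-balancers 'targetGroupArn=arn:aws:elasticloadbalancing:ap-south-1:123456789012:targetgroup/sl-wp-tg/abc123,containerName=wordpress,containerPort=80'
```

Pinning --platform-version 1.4.0 explicitly avoids the LATEST-resolves-to-1.3.0 surprise when EFS volumes are in the task definition.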

Once the container is running, copy the DNS name of the ALB, paste it in your web browser, and hit Enter.
That’s it, you have completed the infra-side setup for WordPress.

Now it’s time to complete the WordPress setup

Select the language.

Since we had already given the DB details as environment variables, they were not asked for during the setup.

Now enter the Site title, admin username, password and the email.

That’s it, WordPress installation is now complete.

Login to view your dashboard.

You can see the URL of the ALB here.

And this is the home page.

And that’s it, you have set up a serverless WordPress in AWS.

If you have more requirements than this, you can contact me or Innovature (where I work) and drop a message and we will contact you.