Blog

  • Integrating DataDog with AWS for Real-Time Monitoring and Logging

    Introduction

    DataDog is a powerful monitoring and analytics platform that helps businesses track the performance of their infrastructure. When integrated with AWS EC2, it provides real-time monitoring, allowing you to gain insights into your instances’ performance, troubleshoot issues, and manage logs effectively. In this tutorial, we’ll walk through the steps to integrate AWS EC2 with DataDog and view your logs on the DataDog dashboard.

    Prerequisites
    Before proceeding, ensure that you have the following:
    An active AWS account.
    Access to an EC2 instance.
    A DataDog account (sign up if you don’t have one yet).

      Step 1: Set Up DataDog API Key
    1. Log in to your DataDog account.
    2. Navigate to the Integrations tab.
    3. Under the API Keys section, click New Key.
    4. Copy the generated API key; you’ll need it for the integration.

        Step 2: Install the DataDog Agent on AWS EC2
    You can install the DataDog agent on your EC2 instance by following these steps:

    1. SSH into your EC2 instance
      ssh -i your-key.pem ec2-user@your-ec2-public-ip
    2. Update your system.
      sudo yum update -y
    3. Install the DataDog agent
      For Amazon Linux 2, the simplest route is DataDog’s official install script, which detects your OS and package manager (substitute your API key):
      DD_AGENT_MAJOR_VERSION=7 DD_API_KEY=your_datadog_api_key DD_SITE="datadoghq.com" \
      bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script.sh)"
    4. Start the DataDog agent.
      sudo systemctl start datadog-agent

      Step 3: Verify DataDog Agent Is Running
      To verify if the agent is working properly:

      sudo datadog-agent status

      Step 4: Configure Logs to Be Sent to DataDog
      Once the agent is installed, you can configure it to send logs from your EC2 instance to DataDog.

      1. Enable log collection in the DataDog agent configuration file
      Open the configuration file:

     sudo vim /etc/datadog-agent/datadog.yaml

    2. Enable log collection
    Find the logs_enabled line and set it to true.

    logs_enabled: true

    3. Restart the DataDog agent

    sudo systemctl restart datadog-agent

    4. Configure the log source
    Now, configure your EC2 instance’s log sources to be collected by the agent:

    sudo vim /etc/datadog-agent/conf.d/<log_source>.d/conf.yaml
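    As an illustration, a minimal log source configuration (the path, service, and source values below are placeholders you should replace) might look like this:

```yaml
logs:
  - type: file
    path: /var/log/myapp/app.log   # placeholder: path to the log file to tail
    service: myapp                 # placeholder: service name shown in DataDog
    source: myapp                  # placeholder: source tag used by log pipelines
```

    After adding or changing a log source configuration, restart the agent again so it picks up the new source.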

      Step 5: Monitor Logs on DataDog Dashboard
    Once the DataDog agent is collecting logs from your EC2 instance, log in to your DataDog dashboard and go to the Logs section; you should see the logs coming from your EC2 instance. You can filter, search, and visualize logs to monitor your application performance in real time.

  • How to Install ERPNext on Ubuntu: A Complete Walkthrough for Beginners

    Introduction

     

    Are you running a small or medium enterprise and looking for a way to integrate your essential business processes? ERPNext, a free and open-source Enterprise Resource Planning (ERP) solution, can streamline your operations by centralizing and automating key functions. This guide will walk you through the step-by-step process of installing ERPNext on your Ubuntu server, unlocking enhanced efficiency and productivity for your team.

    What is ERPNext?

    ERPNext is a robust, open-source ERP platform designed to simplify and automate business operations. It offers integrated tools for accounting, inventory management, customer relationship management (CRM), human resources, and project management. By centralizing data, providing real-time insights, and enabling seamless collaboration, ERPNext is a flexible and cost-effective solution for businesses of all sizes.

    Prerequisites

    • Ubuntu Server: A fresh installation of Ubuntu 22.04.

    • Hardware: Minimum 2 CPU cores, 4GB RAM, and 20GB SSD storage (SSD recommended for best performance).

    • Timezone: Ensure your server’s timezone matches your location. Check with:

    date

    If needed, update the timezone:

    sudo timedatectl set-timezone "TIMEZONE"

    Replace TIMEZONE with your region, such as Asia/Kolkata or Europe/Copenhagen.
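    As a quick sanity check (illustrative; any valid tzdata name works), you can render the zone abbreviation for a candidate timezone before applying it:

```shell
# An unknown TZ value silently falls back to UTC on glibc systems,
# so confirm the abbreviation looks right before running timedatectl.
TZ="Asia/Kolkata" date +%Z   # prints "IST"
```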

    Step-by-Step Installation

    1. Update System Packages

    Start by updating all existing packages:

    sudo apt-get update && sudo apt-get upgrade -y

    2. Create a New User

    Replace yourusername with your preferred username:

    sudo adduser yourusername
    sudo usermod -aG sudo yourusername
    su yourusername
    cd /home/yourusername/

    This helps keep your system secure by avoiding root user exposure.

    3. Install Essential Packages

    • Fail2ban: Protects your server from unauthorized access:

    sudo apt-get install fail2ban -y
    sudo systemctl start fail2ban
    • Supervisor: Manages long-running processes:
    sudo apt update && sudo apt install supervisor -y
    • Git: Version control system:
    sudo apt-get install git -y
    git --version
    • Python: Required for ERPNext dependencies:
    sudo apt-get install python3-dev python3.10-dev python3-setuptools python3-pip python3-distutils python3.10-venv -y
    python3 -V
    • MariaDB: Preferred database for ERPNext:
    sudo apt install mariadb-server mariadb-client -y
    • cURL: For transferring files:
    sudo apt install curl -y
    • Node.js: For JavaScript runtime:
    curl -fsSL https://deb.nodesource.com/setup_20.x -o nodesource_setup.sh
    sudo -E bash nodesource_setup.sh
    sudo apt install nodejs -y
    sudo npm install -g yarn
    node --version
    • Redis: For caching:
    sudo apt-get install redis-server -y
    sudo systemctl start redis-server.service
    • Additional Utilities:
    sudo apt-get install software-properties-common -y
    sudo apt-get install libmysqlclient-dev -y
    sudo apt-get install xvfb libfontconfig wkhtmltopdf -y

    4. Configure MariaDB

    • Secure your database installation:
    sudo mysql_secure_installation
    Answer the prompts as needed; for “Disallow root login remotely?”, select N if you need to allow remote root access.
    • Edit the MySQL configuration file:
    sudo nano /etc/mysql/my.cnf
    Add the following lines at the end:
    
    [mysqld]
    character-set-client-handshake = FALSE
    character-set-server = utf8mb4
    collation-server = utf8mb4_unicode_ci
    
    [mysql]
    default-character-set = utf8mb4
    
    
    • Save and restart MySQL:
    sudo service mysql restart

    5. Install Frappe Bench

    • Frappe Bench is a command-line tool for managing ERPNext installations:
    sudo pip3 install frappe-bench
    bench --version
    bench init --frappe-branch version-15 frappe-bench
    cd /home/yourusername/frappe-bench/
    chmod -R o+rx /home/yourusername/

    6. Create a New Site

    • Replace mysite with your preferred site name:
    bench new-site mysite.local
    • Set your MySQL root password and an admin password when prompted.

    7. Install ERPNext and Modules

    • Install the payments app and ERPNext:
    bench get-app payments
    bench get-app --branch version-15 erpnext
    • Optionally, add the HR module:
    bench get-app hrms
    • Check installation status:
    bench version --format table
    • Install ERPNext and HR modules to your site:
    bench --site mysite.local install-app erpnext
    bench --site mysite.local install-app hrms
    • Enable the scheduler:
    bench --site mysite.local enable-scheduler
    bench --site mysite.local set-maintenance-mode off

    8. Set Up NGINX and Security

    • Configure NGINX:
    bench setup nginx
    sudo supervisorctl restart all

    If you encounter deprecated Ansible playbook entries, update /usr/local/lib/python3.10/dist-packages/bench/playbooks/roles/mariadb/tasks/main.yml by replacing "- include" with "- include_tasks".

    • Set up production configuration:
    sudo bench setup production yourusername
    • Configure the firewall:
    sudo ufw allow 22,25,143,80,443,3306,3022,8000/tcp
    sudo ufw enable

    Access your ERPNext instance at http://your-server-ip:80 using the admin credentials you set.

    9. Add a Domain and SSL Certificate

    • DNS Setup: Add an A record for your domain pointing to your server’s IP.
    • Enable Multitenancy:
    bench config dns_multitenant on
    • Add Domain:
    bench setup add-domain yourdomain.com --site yoursitename
    • Reconfigure NGINX:
    bench setup nginx
    sudo service nginx reload
    • Install SSL with Certbot:
    sudo apt update && sudo apt install snapd -y
    sudo snap install core
    sudo snap refresh core
    sudo snap install --classic certbot
    sudo ln -s /snap/bin/certbot /usr/bin/certbot
    sudo certbot --nginx

    Follow the prompts to secure your domain.

    Conclusion
    By following these instructions, you will have a fully operational ERPNext instance running on your Ubuntu server, ready to streamline your business operations. Take time to explore ERPNext’s features and customize them to fit your organization’s needs.

  • Step-by-Step Guide to Setting Up Your Own Mastodon Server

    What is Mastodon?

    Mastodon is a decentralized social network comprised of separate servers focused on specific themes, topics, or interests. Users can join these servers, follow others, engage in conversations, and perform activities similar to what you’d typically do on platforms like Twitter.

    Mastodon has been in existence since March 2016, but it gained significant popularity in late 2022, thanks to a notable acquisition of a particular social media platform.

    A Mastodon website can function independently, much like a regular website. Users sign up, post messages, share pictures, and have conversations, similar to what you’d expect on any website. However, unlike traditional websites, Mastodon websites can connect and interact with each other. This means that users from different Mastodon websites can communicate, much like how you can send an email from your Gmail account to someone using Outlook, Fastmail, Protonmail, or any other email service, as long as you have their email address. In the same way, you can mention or send messages to anyone on any Mastodon website if you know their username or address.

    Mastodon vs. Twitter: Exploring the Differences in Social Media Platforms

    In the world of social media, Mastodon stands out because it’s all about people having control and being part of a community. It’s not like Twitter, which is really popular but has some problems like keeping your data private and not being clear about how its rules work.

    Mastodon is different. It’s like a friendly neighborhood where everyone helps make the rules. On Twitter, special computer rules decide which posts you see first, which can make you only see similar things and never have a real conversation. But on Mastodon, you see posts in the order they were made, which makes it open and friendly.

    What’s more, on Mastodon, you can even make your own neighborhood and have your own rules. You and your neighbors decide what’s okay and what’s not. This makes Mastodon a unique and user-friendly social place.

    The Benefits of Hosting Your Own Instance

    When it comes to Mastodon, you have choices: join an existing group or start your own. Let’s break it down. Joining an existing group is like moving into a ready-made neighborhood; you’re part of a community right away. But you can’t customize much. On the other hand, having your own group has lots of benefits. You’re the boss; you can make it look and work as you like. You decide the rules, creating a cozy place for you and your friends. Your data is yours, so privacy is no problem.

    Creating Your Own Mastodon Server 

    Before you start your own Mastodon community, remember a few things. You’ll need a server or instance. Also, you must have a proper domain. You can get a domain from any domain provider, and they offer different servers, too. The server you choose depends on how many people will use your Mastodon and how active it is. If you want more options, they have different server configurations too.

    Pre-requisites

    • A computer running Ubuntu 20.04 or Debian 11, where you have full administrative access (root access).
    • Choose a domain name (or subdomain) for your Mastodon server, for example, example.com.
    • An email delivery service or another SMTP server.

    Creating a mastodon System User and Granting It Administrative Privileges

    To begin, switch to the root user and create your mastodon user using the ‘adduser’ command.

    sudo su -
    adduser mastodon

    Following that, add your Mastodon user to the sudo group to provide it with administrative privileges:

    usermod -aG sudo mastodon

    Setting Up Mastodon: Installing Dependencies and Cloning the Repository

    Before proceeding, ensure that you have installed curl, wget, GnuPG, apt-transport-https, lsb-release, and ca-certificates.

    apt install -y curl wget gnupg apt-transport-https lsb-release ca-certificates
    

    Node.js

    curl -sL https://deb.nodesource.com/setup_16.x | bash -

    Install Node.js 16.x and npm:

    apt-get install -y nodejs

    You might also require development tools for creating native add-ons.

    apt-get install gcc g++ make

    To install the Yarn package manager, run:

    curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | gpg --dearmor | sudo tee /usr/share/keyrings/yarnkey.gpg >/dev/null
    echo "deb [signed-by=/usr/share/keyrings/yarnkey.gpg] https://dl.yarnpkg.com/debian stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
    apt-get update
    apt-get install yarn

    Add the PostgreSQL repository:

    wget -O /usr/share/keyrings/postgresql.asc https://www.postgresql.org/media/keys/ACCC4CF8.asc
    echo "deb [signed-by=/usr/share/keyrings/postgresql.asc] http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/postgresql.list

    Install System packages

    apt update
    apt install -y \
      imagemagick ffmpeg libpq-dev libxml2-dev libxslt1-dev file git-core \
      g++ libprotobuf-dev protobuf-compiler pkg-config nodejs gcc autoconf \
      bison build-essential libssl-dev libyaml-dev libreadline6-dev \
      zlib1g-dev libncurses5-dev libffi-dev libgdbm-dev \
      nginx redis-server redis-tools postgresql postgresql-contrib \
      certbot python3-certbot-nginx libidn11-dev libicu-dev libjemalloc-dev

    To use yarn, activate Node.js’s corepack, which is included by default.

    corepack enable

    Yarn has two distinct development paths, one being version 1.0 (known as classic), and the other being version 2.0. Mastodon specifically needs the classic version of yarn. You can enable this classic version using the command: “yarn set version”.

    yarn set version classic

    Installing Ruby

    To make managing Ruby versions simpler and more efficient, we’ll use rbenv. With rbenv, it’s easier to obtain the correct Ruby versions and update them when new releases become available. However, it’s important to note that rbenv must be installed for a specific Linux user. Therefore, the first step is to create the user that Mastodon will run as.

    Switch to the mastodon user:

    su - mastodon
    git clone https://github.com/rbenv/rbenv.git ~/.rbenv
    cd ~/.rbenv
    src/configure 
    make -C src
    echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
    echo 'eval "$(rbenv init -)"' >> ~/.bashrc
    exec bash
    git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
    RUBY_CONFIGURE_OPTS=--with-jemalloc rbenv install 3.2.2

    Set this Ruby version as the new default:

    rbenv global 3.2.2
    gem install bundler --no-document

    If a newer version of RubyGems is available, update it with:

    gem update --system 3.4.20

    Setting up PostgreSQL

    You should create a PostgreSQL user for Mastodon. In a basic setup, the simplest approach is to use “ident” authentication. This means that the PostgreSQL user doesn’t have a separate password and can be accessed by the Linux user with the same username.

    sudo -u postgres createuser --interactive
    Output
    Enter name of role to add: mastodon
    Shall the new role be a superuser? (y/n) y

    Setting up Mastodon

    To obtain the latest stable release of Mastodon, use Git to download the code.

    cd /home/mastodon/ 
    git clone https://github.com/mastodon/mastodon.git live
    cd live
    git checkout $(git tag -l | grep '^v[0-9.]*$' | sort -V | tail -n 1)
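    To see why the checkout command above lands on the latest stable release, here is the same filter run over a hand-made tag list (the tags are made up for the demo):

```shell
# grep keeps only plain vX.Y.Z tags (release candidates such as
# v4.2.0-rc1 are dropped), sort -V orders them as versions, and
# tail -n 1 picks the newest.
printf 'v4.1.0\nv4.2.0-rc1\nv4.2.0\nv3.5.9\n' \
  | grep '^v[0-9.]*$' | sort -V | tail -n 1   # prints "v4.2.0"
```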

    Installing the last dependencies

    bundle config deployment 'true'
    bundle config without 'development test'
    bundle install -j$(getconf _NPROCESSORS_ONLN)
    yarn install --pure-lockfile

    The two “bundle config” commands are necessary only during the initial dependency installation. If you plan to update or reinstall dependencies at a later time, you can simply use “bundle install.”

    Run the interactive setup of Mastodon

    RAILS_ENV=production bundle exec rake mastodon:setup

    Enter your domain name, making sure your input is accurate; changing it later can be challenging.

    Mastodon offers a single-user mode where registrations are disabled. If your goal is to create a shared space with others, it’s important not to enable the single-user mode.

    Your instance is identified by its domain name. Changing it afterward will break things.
    Domain name: example.com
    
    Do you want to enable single user mode? (y/N) N
    
    Are you using Docker to run Mastodon? (Y/n) N

    During the PostgreSQL setup, you can keep all the settings as their defaults by simply pressing Enter for each prompt. Your database doesn’t rely on a password for authentication; instead, it uses peer authentication, which we’ll explain further later on. For now, you can leave these settings empty.

    PostgreSQL host:(/var/run/postgresql) /var/run/postgresql
    PostgreSQL port: (5432) 5432
    Name of PostgreSQL database: (mastodon_production) mastodon_production
    Name of PostgreSQL user: (mastodon) mastodon
    Password of PostgreSQL user: 
    Database configuration works!

    Since Redis is only accessible locally on this server, there’s no need to set up a password. Keep the Redis settings at their default values and leave the password field empty; just press Enter for each prompt.

    Redis host:(localhost) localhost
    Redis port:(6379) 6379
    Redis password: 
    Redis configuration works!

    In the next step, you’ll be prompted to decide how Mastodon should handle file storage. You have two options: local storage, or cloud storage with one of several providers. If you prefer local storage, select “No”; if you want cloud storage, you can configure a provider at this point.

    Do you want to store uploaded files on the cloud? (y/N) N

    Mastodon needs access to an SMTP server to send emails. Maintaining a secure and reliable mail server yourself is challenging, so using an external email service provider is recommended. Note that most providers charge based on usage, and some offer a free tier up to a certain limit. Use your discretion here.

    When using email providers, you’ll need to create and configure an account. After signing up, you typically have to verify your domain by adding DNS records. You’ll also require an API key for secure authentication. The exact process varies by provider.

    Do you want to send e-mails from localhost? No
    SMTP server: (smtp.mailgun.org) smtp.mailgun.org
    SMTP port: (587) 587
    SMTP username:
    SMTP password: 
    SMTP authentication: (plain) 
    
    SMTP OpenSSL verify mode: none
    
    Enable STARTTLS: auto
      
    E-mail address to send e-mails "from": Mastodon <notifications@example.com>
    
    Send a test e-mail with this configuration ? (Y/n)Y
    
    Do you want Mastodon to periodically check for important updates and notify you? (Recommended) (Y/n) Y
    
    This configuration will be written to .env.production
    Save configuration? (Y/n) Y
    
    Now that configuration is saved, the database schema must be loaded.
    If the database already exists, this will erase its contents.
    Prepare the database now? Yes
    Running `RAILS_ENV=production rails db:setup` ...
    
    
    Created database 'mastodon_production'
    Done!
    
    The final step is compiling CSS/JS assets.
    This may take a while and consume a lot of RAM.
    Compile the assets now? Yes
    

    Your Mastodon server is ready, but it’s not running yet. You’ll be prompted to create an administrator account for logging into your Mastodon server.

    Do you want to create an admin user straight away? (Y/n) Y
    Username: (admin) 
    E-mail: 

    To check the contents of your generated Mastodon configuration file, open it using vim or your preferred text editor.

    vim ~/live/.env.production
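    For orientation, this file stores your setup answers as environment variables. An illustrative excerpt (values mirror the prompts above and are placeholders) looks like:

```
LOCAL_DOMAIN=example.com
SINGLE_USER_MODE=false
DB_HOST=/var/run/postgresql
DB_PORT=5432
DB_NAME=mastodon_production
DB_USER=mastodon
REDIS_HOST=localhost
REDIS_PORT=6379
SMTP_SERVER=smtp.mailgun.org
SMTP_PORT=587
```

    The generated file also contains secrets such as SECRET_KEY_BASE; keep it private.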

    Setting Up Nginx for Mastodon with Certbot

    Setting up Certbot

     sudo certbot certonly --nginx -d <your_domain>

    Setting up Nginx

    sudo cp /home/mastodon/live/dist/nginx.conf /etc/nginx/sites-available/mastodon
    sudo ln -s /etc/nginx/sites-available/mastodon /etc/nginx/sites-enabled/mastodon

    Open the file located at /etc/nginx/sites-available/mastodon in a text editor.

    sudo vim /etc/nginx/sites-enabled/mastodon 

    Inside this file, replace all instances of “example.com” with your own domain name. Make any other necessary adjustments according to your configuration needs. Look for lines beginning with “ssl_certificate” and “ssl_certificate_key.” Remove the “#” symbol at the beginning of these lines to uncomment them. Then, replace the path in these lines with the correct file paths for your SSL certificate and key, which should correspond to your domain name. Save the file, after making the changes.
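    If you prefer to script those edits, here is a sketch with sed, demonstrated on a two-line sample file (run the same expressions, with sudo, against /etc/nginx/sites-available/mastodon; the domain is a placeholder):

```shell
DOMAIN="social.example.org"   # placeholder: your domain
# Build a tiny sample resembling the relevant lines of the config.
printf 'server_name example.com;\n# ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;\n' \
  > /tmp/mastodon.conf.demo
# Substitute the domain everywhere and uncomment the ssl_certificate line.
sed -i -e "s/example\.com/$DOMAIN/g" \
       -e 's/^# \(ssl_certificate\)/\1/' /tmp/mastodon.conf.demo
cat /tmp/mastodon.conf.demo
```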

    sudo systemctl reload nginx

    Setting up systemd services

    Copy the systemd service templates from the Mastodon directory into the systemd/system directory:

    sudo cp /home/mastodon/live/dist/mastodon-*.service /etc/systemd/system/

    If you made any changes from the default settings, please double-check that the username and file paths are accurate.

    sudo vim /etc/systemd/system/mastodon-*.service
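    For reference, mastodon-web.service typically contains entries along these lines (an illustrative excerpt; verify against the copied template rather than treating this as authoritative):

```
[Service]
Type=simple
User=mastodon
WorkingDirectory=/home/mastodon/live
Environment="RAILS_ENV=production"
Environment="PORT=3000"
ExecStart=/home/mastodon/.rbenv/shims/bundle exec puma -C config/puma.rb
Restart=always
```

    The User, WorkingDirectory, and ExecStart paths are what usually need changing if you deviated from the default setup.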

    Finally, reload systemd, then enable and start the new services:

    sudo systemctl daemon-reload
    sudo systemctl enable --now mastodon-web mastodon-sidekiq mastodon-streaming

    These services will now start automatically during boot.

    To access your Mastodon instance, open your domain in a web browser. Your Mastodon site will load; click Login and sign in with the credentials you created while setting up Mastodon.

  • Setup Wireguard VPN on Ubuntu

    WireGuard is a communication protocol and free and open-source software that implements encrypted virtual private networks, and was designed with the goals of ease of use, high speed performance, and low attack surface.

    Server-Side Setup:

    • Step 1 — Installing WireGuard and Generating a Key Pair
    $ sudo apt update
    $ sudo apt install wireguard -y
    • Step 2 – Configuring the WireGuard server
     $ sudo -i
     $ cd /etc/wireguard/
     $ umask 077; wg genkey | tee privatekey | wg pubkey > publickey
     $ ls -l privatekey publickey
     $ cat privatekey
     $ cat publickey
     $ sudo vim /etc/wireguard/wg0.conf
    • Server wg0.conf file code
    # local settings for WireGuard Server
    [Interface]
    PrivateKey = ABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBFA= # server private key (placeholder)
    Address = 10.0.0.1/32
    ListenPort = 51820
    
    # IP forwarding
    PreUp = sysctl -w net.ipv4.ip_forward=1
    # IP masquerading
    PreUp = iptables -t mangle -A PREROUTING -i wg0 -j MARK --set-mark 0x30
    PreUp = iptables -t nat -A POSTROUTING ! -o wg0 -m mark --mark 0x30 -j MASQUERADE
    PostDown = iptables -t mangle -D PREROUTING -i wg0 -j MARK --set-mark 0x30
    PostDown = iptables -t nat -D POSTROUTING ! -o wg0 -m mark --mark 0x30 -j MASQUERADE
    
    # firewall local host from wg peers
    PreUp = iptables -A INPUT -i wg0 -m state --state ESTABLISHED,RELATED -j ACCEPT
    PreUp = iptables -A INPUT -i wg0 -j REJECT
    PostDown = iptables -D INPUT -i wg0 -m state --state ESTABLISHED,RELATED -j ACCEPT
    PostDown = iptables -D INPUT -i wg0 -j REJECT
    # firewall wg peers from other hosts
    PreUp = iptables -A FORWARD -o wg0 -m state --state ESTABLISHED,RELATED -j ACCEPT
    PreUp = iptables -A FORWARD -o wg0 -j REJECT
    PostDown = iptables -D FORWARD -o wg0 -m state --state ESTABLISHED,RELATED -j ACCEPT
    PostDown = iptables -D FORWARD -o wg0 -j REJECT
    
    # remote settings for Justin's Workstation
    [Peer]
    PublicKey = ABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBFA= # client public key (placeholder)
    AllowedIPs = 10.0.0.2/32
    • Note
      1. Replace the PrivateKey value with the private key you generated on the server, and the PublicKey value with the client’s public key, generated during the client-side setup.
      2. The Address value is the server’s IP on the VPN tunnel interface; choose it from a range that does not conflict with your VPC.
      3. Set the ‘AllowedIPs’ value to the tunnel address assigned to the client peer.

    Step 3: Start, Stop, and Check the WireGuard Server

    $ sudo systemctl start wg-quick@wg0
    $ sudo systemctl stop wg-quick@wg0
    $ sudo systemctl status wg-quick@wg0
    • Verify the server connection:
    $ sudo wg
    $ ifconfig wg0
    • Step 4: In the VPN server’s AWS security group, open a custom UDP port:
    Port - 51820

    Client-Side Setup:

    • Step 1 — Installing WireGuard and Generating a Key Pair
    $ sudo apt update
    $ sudo apt install wireguard -y
    • Step 2 – Configuring the WireGuard client
    $ sudo -i
    $ cd /etc/wireguard/
    $ umask 077; wg genkey | tee privatekey | wg pubkey > publickey
    $ ls -l privatekey publickey
    $ cat privatekey
    $ cat publickey
    $ sudo vim /etc/wireguard/wg0.conf
    • Client wg0.conf file code
    # local settings for Workstation
    [Interface]
    PrivateKey = cNNHgtsXZXG0cJ7lL5mfEBL3fDaZM6hKNePQu0jCTkU= # PrivateKey of client
    Address = 10.0.0.2/32  # client wg0 ip address
    
    
    # remote settings for WireGuard Server
    [Peer]
    #server publickey
    PublicKey = FDpMPkKH9ldTeipFZB08bizAnbSgWP/lmmgXQMTRil4=
    Endpoint = 3.18.54.161:51820 # serverip:port
    AllowedIPs = 10.10.0.0/16 # server VPC ip address
    • Step 3: Start, stop, and check the WireGuard service
     $ sudo systemctl start wg-quick@wg0
     $ sudo systemctl stop wg-quick@wg0
     $ sudo systemctl status wg-quick@wg0
    • Verify the connection:
    $ sudo wg
    $ ifconfig wg0
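    One common addition for clients behind NAT, not shown above, is a keepalive in the client’s [Peer] section; 25 seconds is the interval usually suggested in WireGuard documentation:

```
[Peer]
# ...existing PublicKey / Endpoint / AllowedIPs lines...
PersistentKeepalive = 25   # send a keepalive every 25 seconds to hold NAT mappings open
```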
  • Exploring Docker Architecture and Command Usage

    Components of Docker Architecture

    Docker uses a client-server architecture, which allows you to interact with Docker through a command-line interface (CLI) or a graphical user interface (GUI) while the Docker daemon (server) does the actual work of managing containers. Here’s an overview of the Docker architecture:

    Docker Client

    This is the interface that a user interacts with. It’s usually a command-line tool (CLI) or a GUI. When you issue a command like “run a container,” the Docker Client translates it into a request to the Docker Daemon.

    Docker Daemon

    The Docker Daemon is a background process running on your system. It’s responsible for the heavy lifting of container management. It listens for API requests from the Docker Client and acts on them. It can create, run, and manage containers, as well as handle tasks like image management, containerizing, data volumes, and network configuration.

    Docker Registry

    Docker images, which are read-only snapshots of applications and their dependencies, are stored in Docker Registries. These are like online repositories where you can upload, share, and download container images. Docker Hub is a popular public registry, but you can also have private ones for your organization.

    Default Registry: Docker comes with a default registry called the Docker Hub. It’s like a public library that contains a vast collection of images for various programming languages and platforms. When you request an image in Docker, it first checks the Docker Hub to see if it’s available there. This is like going to a public library to find a book.

    Private Registry: Just as you can have your personal book collection at home, you can also have your private Docker registry. This is like having your own library where you store images that are specific to your needs, like custom applications or configurations. You can configure Docker to push images to and pull them from such a private registry.

    Commands:
    To log in to Docker Hub:

    docker login
    Username:
    Password:

    Docker Images

    Images in Docker serve as templates or snapshots of a file system with a specific set of files and configurations.

    Application Dependencies: Images specify all the libraries, binaries, and resources required for an application to run. This includes everything from the base operating system to any additional software packages or libraries that an application needs to run.

    Launch Processes: Images define the initial command or process that should be executed when a container is started. This command often kicks off the main application or service within the container.

    Commands

    To see all the images present in your local machine run

    docker images

    To search for images on Docker Hub:

    docker search <image_name>

    To download/pull Images from Docker Hub to your local machine.

    docker pull <image_name>

    To push a Docker image to a registry (e.g., Docker Hub):

    docker push <image_name>:<tag>

    To add or change tags for a Docker image

    docker tag <source_image>[:<source_tag>] <target_image>[:<target_tag>]
    • <source_image> - the name or ID of the image you want to tag.
    • <source_tag> - the source image’s tag, if it has one; omit it to default to latest.
    • <target_image> - the new name you want to give the image.
    • <target_tag> - the tag you want to give the image.

    To get detailed information about a Docker image, including its layers and metadata.

    docker image inspect <image_name/id>:<tag>

    To remove a Docker Image you can use.

    docker rmi <image_name_or_id>

    Docker Build

    Docker Build is a core feature of Docker Engine used extensively in software development. At its core, it’s a command that facilitates the creation of Docker images. This process involves taking your application’s source code, along with necessary dependencies and configuration, and packaging it into a standalone, runnable image.

    Every Docker build begins with a Dockerfile.

    Dockerfile

    A Dockerfile is like a set of instructions written in plain text. It tells a computer what to do step by step to create a special kind of package called an “image.” This image can contain everything an application needs to run.

    So, with a Dockerfile, you can automate the process of building this package. It’s like giving your computer a recipe to follow. It reads the Dockerfile and carries out each instruction to create the image you want. This makes it easier to create and manage these packages for your applications.

    Here are the most common instructions used in a Dockerfile:

    FROM base_image:tag  # A base image use to create the container 
    
    MAINTAINER "Your Name"  # author/owner name (deprecated; prefer LABEL maintainer="Your Name")
    
    WORKDIR /app   # Set the working directory inside the container where instruction will be executed
    
    COPY source_path destination_path   # Copy files from the host into the container
    
    RUN apt-get update && apt-get install -y package1 package2   # Executes a command during the image build process. Commonly used for installing packages and dependencies.
    
    EXPOSE port_number   # Expose ports for the container (for runtime). Informs Docker that the container will listen on the specified port at runtime. It doesn't actually publish the port.
    
    ENV VAR_NAME=value   # Sets environment variables inside the container.
    
    CMD ["executable", "arg1", "arg2"]  # Defines the default command to run when a container starts. It can be overridden at runtime; only one CMD per Dockerfile.
    
    VOLUME ["/data"]  # Define a volume for data storage, it helps in creating a mount point and marks it for storing persistent data.
    
    SHELL ["/bin/bash", "-c"]   # Override the default shell used for shell-form instructions during the build.
    
    ENTRYPOINT ["executable", "arg1", "arg2"]   # Set the entry point to a script. The ENTRYPOINT instruction specifies the command that should be run when the container starts. It is often used for specifying the main executable or entry point of your application.
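    Taken together, these instructions might combine into a Dockerfile like the following sketch for a hypothetical Node.js web app (the base image tag, port, and file names such as server.js are illustrative assumptions):

    ```dockerfile
    # Base image (tag is illustrative)
    FROM node:18-slim

    # Author metadata (replaces the deprecated MAINTAINER instruction)
    LABEL maintainer="Your Name"

    # Working directory inside the container
    WORKDIR /app

    # Copy dependency manifests first so Docker can cache the install layer
    COPY package*.json ./
    RUN npm install

    # Copy the rest of the application source
    COPY . .

    # Environment variable and the port the app listens on
    ENV NODE_ENV=production
    EXPOSE 3000

    # Default command; can be overridden at runtime
    CMD ["node", "server.js"]
    ```

    Copying package*.json before the rest of the source is a common layer-caching trick: the npm install layer is rebuilt only when the dependency manifests change.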

    Building images on your local machine

    • Let’s create a Dockerfile to print Hello World.
    • The first step is to create a file named Dockerfile in your project folder.
    • Edit the Dockerfile and add the instructions to build an image:
    FROM ubuntu
    MAINTAINER <owner_name>
    RUN apt-get update
    CMD ["echo","Hello World"]

    Build your image from the Dockerfile using the command below:

    # when you are in the same directory as the Dockerfile
    docker build -t <image_name>:<tag> .
    # when you are in a different directory, specify the path to the Dockerfile's directory
    docker build -t <image_name>:<tag> /path/to/dockerfile/

    Use the docker images command to see the image you built; you will find it listed with the name you gave it and a unique image ID.

    Containers

    In Docker, containers are lightweight, self-contained packages for running software. They include everything an application needs, such as code, libraries, and settings, and act as runnable instances of Docker images.

    Images: Docker images are like blueprints that contain all the instructions for creating a container. They include the application code, dependencies, and configuration.

    Containers: Containers are the actual running instances of Docker images. When you start a Docker container, you’re essentially using an image to create a live environment to run your application. Each container is isolated from others, so they don’t interfere with each other.

    Isolation: Docker containers are isolated from each other, much like how objects in code are encapsulated and don’t affect each other’s data.

    Commands

    To list actively running containers (“ps” stands for process status):

    docker ps

    To list all containers, including stopped ones:

    docker ps -a

    This command runs a container from an image:

    docker run --name <container_name> <image_id/name> 

    This command is used to view detailed information about a container.

    docker inspect <container_name/id>

    Start containers from an image:
    --name sets the name of the container
    -d runs the container in detached mode, i.e. in the background
    -it keeps an interactive terminal attached so the container stays running

    docker run --name <container_name> -it -d <image_ID/name>

    Start, Stop, and restart Containers:

    docker start <container_ID/Name>
    docker stop <container_ID/Name>
    docker restart <container_ID>

    To run a command inside a running container, or to attach your terminal to it:

    docker exec -it <container_ID> <command>
    docker attach <container_ID>

    To view details of containers

    docker inspect <container_ID>

    To remove/delete a container:
    -f forcefully removes a running container
    docker container prune removes all stopped containers, freeing up disk space

    docker rm <container_ID>
    docker container prune

    To display the logs of a container:

    docker logs <container_ID>

    Copies files from your local machine to a container, or vice versa.

    docker cp <local_path> <container_ID>:<container_path>

    To mount a volume, which stores data outside the container so it persists:
    -v specifies a volume to be mounted into the container

    docker run -v <volume_name>:<container_path> <image>

    Shows real-time resource usage statistics of a container.

    docker stats <container_ID>

    Registries

    A registry is a library or a storage place for images, which are the building blocks of containers.

    Default Registry: Docker comes with a default registry called the Docker Hub. It’s like a public library that contains a vast collection of images for various programming languages and platforms. When you request an image in Docker, it first checks the Docker Hub to see if it’s available there. This is like going to a public library to find a book.

    Private Registry: Just as you can have your personal book collection at home, you can also have your private Docker registry. This is like having your own library where you store images that are specific to your needs, like custom applications or configurations. You can configure Docker to push and pull images from your own private registry.

    Docker engine

    The Docker Engine is like the heart of Docker, where everything happens. It’s the software that makes containers work on your computer.

    Docker Daemon: It runs on your computer and does all the heavy lifting. It’s responsible for creating, managing, and running containers.

    Docker Client: It’s a command-line tool you use to tell the Docker Daemon what to do. You give it special commands, and it communicates with the Daemon to make things happen.

    Docker compose

    Docker Compose is a valuable tool for defining and running multi-container Docker applications. It simplifies the process of configuring and managing complex applications by utilizing a YAML file to specify the setup of your application’s various services. This configuration file acts as a blueprint, and with a single command, you can create and launch all the services according to the defined configuration.

    The significant benefit of using Docker Compose is that you can specify your application’s entire setup in a single file, which you keep at the root of your project’s repository. This file can be version-controlled, making it easy for others to collaborate on your project. All someone needs to do is clone your repository and start the application using Docker Compose. As a result, you may find many projects on platforms like GitHub and GitLab adopting this approach.

    Command to install Docker Compose on Ubuntu:

    sudo apt install docker-compose

    To check the version of Docker Compose:

    docker-compose --version

    Docker-compose.yml File

    Docker Compose is the tool we use for setting up our local development environments. To define how the containers work together, Docker Compose relies on a file called docker-compose.yml. This file specifies important details, such as which images are needed, which ports should be made available, whether the containers can access the host’s files, and what commands to execute when they begin running. In essence, it’s a blueprint that describes how everything should work together.
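    As a sketch of what such a blueprint looks like, here is a hypothetical docker-compose.yml for a two-service setup: a web app plus a Redis cache. The service names, image tags, ports, and paths are all illustrative assumptions, not part of the original tutorial:

    ```yaml
    version: "3.8"
    services:
      web:
        build: .              # build the image from the Dockerfile in this directory
        ports:
          - "8000:8000"       # expose container port 8000 on host port 8000
        volumes:
          - .:/app            # give the container access to the host's project files
        depends_on:
          - redis             # start redis before the web service
      redis:
        image: redis:7        # pull a ready-made image from the registry
        volumes:
          - redis-data:/data  # persist cache data in a named volume
    volumes:
      redis-data:
    ```

    With this file at the root of the repository, docker-compose up -d brings up both services in one command.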

    Running docker-compose

    docker-compose up -d

    To see running containers and their state, with ports:

    docker-compose ps

    To check the logs of the docker-compose use the command:

    docker-compose logs

    To stop, pause, and unpause the running docker-compose:

    docker-compose stop
    docker-compose pause
    docker-compose unpause

    To remove the containers, networks, and volumes associated with this containerized environment, use this command:

    docker-compose down
  • Docker Introduction and Its Installation Steps

    What is Docker?

    Docker is a tool, or we can say an open-source platform, that helps you build, deploy, run, and manage applications, packaging an application together with all its dependencies in a container. It is the most popular tool for containerizing your applications, and it helps you deliver software quickly by separating your application from your infrastructure and running it in a container.

    Docker provides you with a loosely isolated environment so that you can package and run your application smoothly. You can run many containers simultaneously on a single host.

    How does Docker Work?

    Docker follows a client-server model in its architecture. At its core, there are two key components: the Docker client and the Docker daemon. These components collaborate to handle tasks such as container creation, execution, and distribution. They can be situated on the same machine, or you can configure a Docker client to interact remotely with a Docker daemon.

    Communication between the Docker client and daemon takes place via a REST API, utilizing either UNIX sockets for local communication or a network interface for remote interaction. In addition to the Docker client, there is another tool called Docker Compose, which facilitates the management of applications composed of multiple containers.

    • The Docker daemon is always on the lookout for instructions from the Docker API, which is like its phone line for receiving commands. It manages important things in the Docker world like images, containers, networks, and volumes. But here’s the cool part: The Docker daemon can talk with other daemons when needed.
    • The Docker client, which is commonly called “docker,” is like the main control panel for most people using Docker. When you tell it to do something, like running a container with the command “docker run,” it sends those instructions to the Docker daemon, which is like the worker behind the scenes. This Docker daemon then carries out the actual work. Think of the Docker client as the one who talks to you, and the Docker daemon as the one who does the heavy lifting. They communicate using something called the Docker API, which is like their language.
    • Think of a Docker registry as a big digital library where you can find special software packages called “Docker images.”

    Install Docker

    You can install Docker on different platforms, each with its own installation method:

    • Linux
    • macOS
    • Windows

    Install docker on Linux

    When it comes to installing Docker on a Linux system, you have two primary methods based on your distribution: the Debian method and the RPM method. These methods cater to Debian-based and Red Hat-based distributions, respectively.

    • Install docker for Debian

    Debian-based Linux distributions, such as Ubuntu, offer several methods to install Docker. Each method has its own characteristics, and the choice depends on factors like system requirements, preferences, and use cases. Here are some of the main methods for installing Docker on Debian-based systems.

    Install Docker from the apt repository

    sudo apt-get update 
    sudo apt-get install docker.io

    Install Docker from the Package

    1. Go to https://download.docker.com/linux/debian/dists/
    2. Select the Debian version in the list you want to download
    3. Download the following deb files for the Docker
    • containerd.io_<version>_<arch>.deb
    • docker-ce_<version>_<arch>.deb
    • docker-ce-cli_<version>_<arch>.deb
    • docker-buildx-plugin_<version>_<arch>.deb
    • docker-compose-plugin_<version>_<arch>.deb
    4. Install the .deb files: go to the location where you downloaded the Docker packages and run the command.
    sudo dpkg -i ./containerd.io_<version>_<arch>.deb \
      ./docker-ce_<version>_<arch>.deb \
      ./docker-ce-cli_<version>_<arch>.deb \
      ./docker-buildx-plugin_<version>_<arch>.deb \
      ./docker-compose-plugin_<version>_<arch>.deb

    Docker Daemon will start automatically.

    To check that the Docker daemon was installed successfully, you can run the hello-world image.

    sudo service docker start
    sudo docker run hello-world

    Install docker for RPM

    RPM-based Linux distributions, such as CentOS and Fedora, offer various methods to install Docker. These methods follow different use cases and user preferences. Here are some of the primary methods for installing Docker on RPM-based systems.

    Install docker using the rpm repository.

    sudo yum update
    sudo yum install docker.x86_64

    Install docker from the package.

    Go to the Docker official repository for CentOS and other RPM-based distributions by navigating to the following URL in your web browser: https://download.docker.com/linux/.

    Choose the version of rpm that corresponds to your system. You’ll see directories like ‘7’ or ‘8’ for CentOS 7 or CentOS 8, and also for other rpm-based OS respectively. Click on the directory that matches your version.

    Click on the .rpm file for the Docker version you’ve chosen. Your web browser will typically prompt you to download the file. Save it to your local machine.

    Once you have downloaded the Docker .rpm file, you can install Docker using the package manager, typically with a command like the following, replacing <version> with the actual version you downloaded.

    sudo yum install docker-ce-<version>.rpm

    To check if the docker daemon is installed successfully, you can run the Hello-World image.

    sudo systemctl start docker
    sudo docker run hello-world

    Install Docker on Mac

    When it comes to installing Docker on macOS, you have two main methods: Interactive Installation and Command Line Interface (CLI) Installation. Each method serves a specific purpose and offers distinct advantages.

    Install Docker Interactively

    1. The first step is to download the Docker.dmg file, Locate the downloaded Docker.dmg file, and double-click it. This action opens the Docker installer, showing you its contents.
    2. You’ll see the Docker icon inside the installer window. To install Docker on your macOS system, simply drag this Docker icon and drop it into your Applications folder. This process effectively installs Docker on your computer.
    3. Now that Docker is installed, go to your Applications folder. Inside this folder, find the Docker.app icon. To start Docker, double-click on Docker.app. This action launches the Docker application.
    4. When Docker starts, you’ll notice the Docker menu, and you may encounter the Docker Subscription Service Agreement. This agreement outlines the terms and conditions for using Docker. Carefully review the agreement, then click the “Accept” button to proceed.
    5. During the installation of Docker Desktop on macOS, you have the option to choose between two configuration modes: “Use recommended settings” or “Use advanced settings.” These settings allow you to customize the Docker Desktop according to your requirements.
    6. After you’ve made your configuration choices during the Docker Desktop installation on macOS, you’ll proceed to the final step.

    Install Docker from the command line

    To install Docker Desktop on macOS using terminal commands instead of the graphical interface, you can follow these commands. After downloading Docker.dmg run commands to install in Application Folder:

    sudo hdiutil attach Docker.dmg
    sudo /Volumes/Docker/Docker.app/Contents/MacOS/install
    sudo hdiutil detach /Volumes/Docker

    Install Docker on Windows

    • To install Docker on Windows, you can use Docker Desktop, which provides an easy way to run Docker containers on Windows. Docker Desktop includes the Docker Engine, Docker CLI, and Docker Compose. Here are the steps to install Docker on Windows:
    • Make sure you have a 64-bit version of Windows 10 or later with Hyper-V and Windows Subsystem for Linux (WSL) 2 support. These features are required to run Docker containers.

    Enable Hyper-v and WSL feature

    • To check whether WSL or Hyper-V is enabled on your system, go to the search bar and search for “Turn Windows features on or off”. Check the Hyper-V and WSL entries: if they are enabled, move to the next step; otherwise, enable them and restart your system.
    • To confirm the feature is installed, open a terminal and run wsl --version to check the version. It should be version 2.

    Install Docker Interactively

    • Download Docker Desktop and run the installer (.exe file).
    • Follow the instructions in the installer.
    • If asked to authorize the installer, click “Yes” or “Authorize.”
    • Once it’s done, click “Close” to finish the installation.

    Install from Command Line

    1. After downloading “Docker Desktop Installer.exe,” open a terminal and run the command for your shell.

    If you’re using PowerShell, run the command:

    Start-Process 'Docker Desktop Installer.exe' -Wait install

    If using the Windows Command Prompt run the command:

    start /w "" "Docker Desktop Installer.exe" install

    How to run Docker Commands without using Sudo

    • The first step is to add your user to the docker group. If the docker group does not exist on your system, create a group named docker:
    $ sudo groupadd docker

    The next step is to add a user to that group; replace $USER with your username (by default it expands to the current user).

    $ sudo gpasswd -a $USER docker

    Restart the Docker daemon (then log out and back in so the group change takes effect):

    $ sudo service docker restart
  • How to integrate Puppeteer with Node on Ubuntu

    “Puppeteer” is a special tool made by the Chrome team that lets you use a web browser, like Chrome, without seeing it on your screen. You can tell it what to do using code. People use Puppeteer for things like copying information from websites, filling out online forms automatically, making PDF files, and lots of other tasks on the internet. It’s like having a robot that can browse the web for you.

    Update your system

    sudo apt-get update -y

    Install dependencies

    sudo apt install -y gconf-service libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 ca-certificates fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils wget

    Puppeteer, even though it doesn’t have a graphical interface, relies on some libraries that talk to the X11 server. One of these is libxcb, which needs a shared library called libX11-xcb.so.1. To fix a missing-library error, you can install the libx11-xcb1 package on Debian-based systems.

    But sometimes, when you install a missing library, you might run into another missing library, and this can keep happening. That’s why it’s important to install a list of the required libraries to ensure Puppeteer runs smoothly.

    Install NodeJS using NVM

    $ wget https://raw.githubusercontent.com/nvm-sh/nvm/master/install.sh
    $ bash install.sh
    $ source ~/.bashrc
    $ nvm list-remote 
    $ nvm install v18
    $ nvm install node
    $ nvm use 18
    $ node --version

    Install puppeteer

    $ mkdir your-project
    $ mkdir -p /home/ubuntu/.cache/puppeteer
    $ chmod -R 700 /home/ubuntu/.cache/puppeteer 
    $ cd your-project 
    $ npm i
    $ npm install puppeteer
    $ cd /home/ubuntu/.cache/puppeteer
    $ npm i
    $ npm install puppeteer

    Install Google Chrome

    $ wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
    $ sudo apt install ./google-chrome-stable_current_amd64.deb -y
    $ google-chrome --version

    To test if Puppeteer was installed successfully, you can create a JavaScript script named “test.js” and use the following code:

    const puppeteer = require('puppeteer');
    (async () => {
      const browser = await puppeteer.launch({
        args: ["--no-sandbox", "--disable-setuid-sandbox"]
      });
      const page = await browser.newPage();
      const url = 'https://google.com';
      await page.goto(url);
      await page.pdf({ path: 'page.pdf', format: 'A4' });
      await browser.close();
    })();

    Test it:

    node test.js

    Troubleshooting :

    Error 1: Could not find Chrome (ver. xxx.xxx.xx).

    1. You did not perform an installation before running the script (e.g. `npm install`), or
    2. your cache path is incorrectly configured (here: /home/ubuntu/.cache/puppeteer); check whether the Chrome directory exists:
       $ ls /home/ubuntu/.cache/puppeteer
    3. Verify that Google Chrome is correctly installed:
       $ ls /usr/bin/google-chrome
        

    Error 2: Failed to launch the browser process! [1003/185100.101396:ERROR:zygote_host_impl_linux.cc(100)] Running as root without --no-sandbox is not supported.

    To fix the above (Running as root without --no-sandbox is not supported) error, use following steps:
    
    Edit the below file
    
    $ vim /usr/bin/google-chrome
    
    and replace the line
    
    exec -a "$0" "$HERE/chrome" "$@"
    
    with the line
    
    exec -a "$0" "$HERE/chrome" "$@"  --no-sandbox

    Supported Chrome Versions:

    Chrome  117.0.5938.149 - Puppeteer v21.3.7
    Chrome  117.0.5938.92 - Puppeteer v21.3.2
    Chrome  117.0.5938.62 - Puppeteer v21.3.0
    Chrome  116.0.5845.96 - Puppeteer v21.1.0
    Chrome  115.0.5790.170 - Puppeteer v21.0.2
    Chrome  115.0.5790.102 - Puppeteer v21.0.0
    Chrome  115.0.5790.98 - Puppeteer v20.9.0
    Chrome  114.0.5735.133 - Puppeteer v20.7.2
    Chrome  114.0.5735.90 - Puppeteer v20.6.0
    Chrome  113.0.5672.63 - Puppeteer v20.1.0
    Chrome  112.0.5615.121 - Puppeteer v20.0.0
  • AWS Load balancing with EC2

    Load Balancing

    Load balancing is a method of distributing traffic equally among running servers, which helps optimize application availability. A load balancer sits in front of your servers and distributes incoming client requests evenly across them. Modern applications serve heavy traffic and millions of users, each of whom must receive the correct data, text, and information. To handle this kind of traffic, a number of servers run with duplicate data, and the load balancer distributes incoming client traffic among them, ensuring every resource is used equally.

    How Load Balancing Work?

    Load balancing comes in two forms: hardware load balancers, which are physical appliances installed and maintained on-premises, and software load balancers, which are software installed on a standard server or a virtual machine.

    A load balancer directs incoming traffic and distributes it among the pool of servers according to the chosen algorithm.

    The load balancer also checks the health of the back-end servers; if a server fails its health check and cannot fulfill requests, the load balancer stops sending traffic to it.

    Benefits of load balancing

    • Scalability

    A load balancer helps you respond to changes in incoming traffic: you can add or remove servers as demand changes and direct application traffic across multiple servers.

    • Availability

    A load balancer improves fault tolerance: it automatically detects a failing server and directs traffic to the healthy servers so that the application doesn’t crash.

    • Performance

    A load balancer reduces your application’s response time by lowering network latency: it redirects each client request to the closest available server, and it distributes traffic equally across servers to increase performance.

    • Security

    Load balancers include SSL encryption, MFA, and web application firewalls, which help secure your application.

    Load Balancing Algorithms.

    How a load balancer distributes traffic across servers is defined by its algorithm: the set of rules the load balancer follows. These algorithms fall into two categories:

    • Static load balancing
    • Dynamic load balancing

    Static load balancing

    Static load balancing algorithms are independent of the current server state and follow fixed rules. Some of these algorithms are:

    • Round-robin method

    Forwards traffic to each server in turn; one of the simplest methods of distributing traffic.

    • Weighted round-robin method

    Similar to round-robin, except that each server is assigned a numeric weight according to its performance and receives traffic in proportion to that weight.

    • IP Hash method

    Works on IPs and a hash key: the algorithm uses the source and destination IP addresses of the client and server to generate a hash key, so a given client is consistently routed to the same server.
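    The static algorithms above can be sketched in a few lines of Python. This is an illustrative sketch, not a real load balancer: the server names and weights are assumptions, and hashing stands in for whatever hash an actual balancer uses.

    ```python
    import hashlib
    from itertools import cycle

    # Hypothetical pool of back-end servers (names are illustrative).
    servers = ["srv-a", "srv-b", "srv-c"]

    # Round-robin: forward traffic to each server in turn.
    rr = cycle(servers)

    def round_robin():
        return next(rr)

    # Weighted round-robin: a server with weight 3 receives three requests
    # for every one request a weight-1 server receives.
    weights = {"srv-a": 3, "srv-b": 1, "srv-c": 1}
    wrr = cycle([s for s in servers for _ in range(weights[s])])

    def weighted_round_robin():
        return next(wrr)

    # IP hash: hash the client/server IP pair so the same client
    # is consistently routed to the same back-end server.
    def ip_hash(client_ip, dest_ip):
        digest = hashlib.sha256(f"{client_ip}:{dest_ip}".encode()).hexdigest()
        return servers[int(digest, 16) % len(servers)]
    ```

    Note that all three functions decide without looking at server load; that is exactly what makes these algorithms static.
    
    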

    Dynamic Load balancing

    Dynamic load balancing algorithms examine the current state of the servers before distributing traffic. Some of these algorithms are:

    • Least connection method

    Traffic is sent to the server with the fewest active connections at the time the client request is received.

    • Weighted least connection method

    This method maintains a list of weighted application servers along with their number of active connections. It uses more computation time than the least connection algorithm.

    • Least response time method

    It collects the response times of calls made to each server and, based on that information, picks the instance with the fastest response to receive the traffic.

    • Resource-based method

    This algorithm distributes incoming traffic based on the current load on each server.
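    The connection-based methods above can be sketched as two small Python functions. The connection counts and weights are illustrative assumptions, not real measurements:

    ```python
    # Least connection: pick the server with the fewest active connections.
    # `active` maps server name -> number of currently active connections.
    def least_connections(active):
        return min(active, key=active.get)

    # Weighted least connection: divide active connections by the server's
    # weight, so a heavier (more capable) server may hold proportionally
    # more connections before it stops being preferred.
    def weighted_least_connections(active, weights):
        return min(active, key=lambda s: active[s] / weights[s])
    ```

    For example, with counts {"a": 5, "b": 2, "c": 9}, least_connections picks "b"; the weighted variant can still prefer "a" over "b" at equal counts if "a" carries a higher weight.
    
    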

    Types of load balancing

    • Application Load Balancing
    • Network Load balancing
    • Gateway Load Balancing
    • Classic Load Balancing

    Application Load Balancing

    Application load balancing works with HTTP and HTTPS traffic. It operates at the application layer of the OSI model and enables a flexible feature set for your web application. This load-balancing technique distributes incoming application traffic across multiple servers, which helps increase the availability of your application.

    Network Load Balancing

    Network Load Balancer focuses on high-speed, low-latency traffic handling, mainly for the TCP, TLS, and UDP protocols. It operates at the network layer of the OSI model. Use it when you need ultra-high performance, TLS offloading at scale, centralized certificate deployment, support for UDP, or static IP addresses for your application.

    How to configure the load balancer?

    • If you have an AWS account then log in to your account, if not then you have to create your AWS account to access the services of AWS. You can create your AWS account by using the link given below.( https://portal.aws.amazon.com/)
    • In your EC2 dashboard on the left side scroll down and you will find an option for Load Balancing.
    • The first step is to create two EC2 instances with different availability zones that contain your application.

    Note

    If you want to verify that load balancing is working, deploy the same application on both instances but with different outputs, so that you can later identify which server served each request.

    Creating Target Groups

    • Before creating a load balancer you have to create a Target group that contains the instance which you want to target. This target group helps you to route requests to one or more registered targets.
    • Click on Target Groups under the Load Balancing option; you will find an option to create target groups.
    • In target groups, you will find the basic configuration in which you have to select the target type: instances, IP addresses, a Lambda function, or an Application Load Balancer.
    • Select the option on which you want to create the load balancer; in our case we are using instances.
    • In the next step, give a name to your target group, then select a protocol and port number, an IPv4/IPv6 address type, and the VPC in which you created your instances.
    • Next, select the protocol for your target group; below that you will find the health check option for your protocol, where you can also configure additional settings.
    • If you want to give tags to your target group, click on the tags option, then click on the Next button.
    • In the next step, select your targets. Choose the targets across which you want to distribute traffic, and click on “Include as pending below”.
    • Just below, in review targets, you will see your selected targets; then click on create target group.
    • Go to the target group dashboard and check the health of your target group; if it shows healthy, go to the next step.

    Creating Load Balancer

    • On the left side of the EC2 dashboard, you will find an option for load balancing click on that. 
    • You will find an option to create load balancing by clicking on the create load balancer option.
    • Three options are provided for creating a load balancer; we will use application load balancing, so click Create under Application Load Balancer.
    • Give a name to your load balancer. Just below, you have two options: one for requests from the internet and the other for requests from the internal network.
    • Internet-facing requires a public subnet and routes requests from clients over the internet to the targets.
    • Internal requires a private subnet and routes requests internally.
    • Choose IPv4; if you want to use IPv6, you have the option of dual-stack.
    • The next step is network mapping, in which you select the VPC and subnets where you created your instances.
    • A load balancer will only work if you have a minimum of 2 instances in different Availability Zones.
    • The next step is to select a security group for your load balancer; you can use an existing one or create a new one.
    • After that, add a listener and routing; the listener checks for connection requests using the port and protocol you configure.
    • Select the target group that you created before.
    • If you want to add a tag to your load balancer, you can add it; then check the summary and click create load balancer.
    • After creating, you will see your load balancer on the dashboard. To check whether it is working, copy the DNS name of your load balancer, shown just below.
    • Paste the DNS name into your browser and load the page; refresh it a few times and you will see the output change between the two EC2 instances, as you defined them.
  • Auto Scaling Groups

    Auto scaling is a scaling method that scales your EC2 instances up or down according to the workload of an application. You create a collection of instances that come under an Auto Scaling group: a group in which you define the minimum, maximum, and desired number of instances, and more. For example, if you have an application running on EC2 and traffic suddenly increases, you will want another server to run your application; auto scaling automatically scales your instances up when the workload increases and scales them down when they are not needed.

    Auto Scaling Components

    • Groups

    Your instances are organized into groups so that they can be treated as a single unit for scaling and management. When you create a group, you define the minimum, maximum, and desired number of instances.

    • Templates

    Your group uses a launch template (or launch configuration) to run your EC2 instances. In it, you specify the instance configuration, such as the AMI, instance type, key pair, and security groups.
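As a sketch, a launch template can also be created from the AWS CLI. The template name, AMI ID, instance type, key name, and security group below are placeholders, not values from this tutorial:

```shell
# Sketch only: replace the AMI ID, key name, and security group with your own.
aws ec2 create-launch-template \
  --launch-template-name my-template \
  --launch-template-data '{
    "ImageId": "ami-0123456789abcdef0",
    "InstanceType": "t3.micro",
    "KeyName": "my-key",
    "SecurityGroupIds": ["sg-0123456789abcdef0"]
  }'
```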

    Types of Auto Scaling

    Auto scaling is of two types:

    1. Horizontal Auto Scaling
    2. Vertical Auto Scaling
    • Horizontal Autoscaling

    A type of auto scaling in which you increase capacity by adding more instances of the same configuration. If the workload on your instance increases, another instance (as defined in your group) is launched to share the load.

    • Vertical Auto Scaling

    AWS also gives you the option of scaling your instances vertically, i.e. increasing an instance's configuration. For example, if you are using instances with 2 GB of memory and 2 cores and need more, you can move to a larger instance type.
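Vertical scaling can be sketched with the AWS CLI as well. Note that an instance must be stopped before its type can be changed; the instance ID and type names below are placeholders:

```shell
# Sketch only: the instance ID is a placeholder.
# An instance must be stopped before its type can be changed.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

# Change the instance to a larger type, e.g. from t3.small to t3.large
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --instance-type '{"Value": "t3.large"}'

aws ec2 start-instances --instance-ids i-0123456789abcdef0
```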

    Get started with AutoScaling

    • If you have an AWS account, log in; if not, create an AWS account to access AWS services. You can create one at https://portal.aws.amazon.com/.
    • In your AWS account on the top left search for EC2.
    • In your EC2 dashboard, scroll down on the left side and you will find the Auto Scaling section with the Auto Scaling Groups option.
    • Click on Auto Scaling Groups; you will create a group in which you define your scaling configuration.
    • Click Create Auto Scaling group and you will see the group creation page.
    • Before creating a group, you have to create a launch template in which you define your instance configuration. You can create one from the Launch Templates page by clicking Create launch template.
    • Just like creating an ec2 instance, you have to create your launch template with the required configuration.
    • After creating the template, Select your created template and click next.
    • After clicking Next, choose your network: the VPC and the subnets in which you want to run your instances.
    • You will find an instance type requirements option. If you want to override the launch template, you can do so by specifying different instance attributes or by manually adding instance types. Then click Next.

    In the next step, you will find an option for load balancing. If you want to load balance your application, choose one of the options provided: you can attach an existing load balancer or create a new one.

    • Below the load balancer option, you will find VPC Lattice integration options. VPC Lattice facilitates communication between AWS services and lets you connect and manage your applications across compute services in AWS, improving networking capabilities and scalability. Proceed with no VPC Lattice integration, or attach a VPC Lattice service you created.
    • You will then find options for health checks, including the health check grace period, plus an additional option for monitoring; then click Next.
    • In the next step, you will find the Group size option, in which you specify the size of the Auto Scaling group by setting the desired capacity. You can also specify minimum and maximum capacity limits; your desired capacity must be within that range.
    • Next, you will find the scaling policy option, with which your Auto Scaling group dynamically resizes to meet changes in demand.
    • Instance scale-in protection protects newly launched instances from being terminated during scale-in. Click Next for the next step.
    • If you want notifications sent to SNS whenever your group scales, configure that in the notifications section. Click Next for further steps.
    • If you want to add a tag to your group, add it and click Next. You will see a page for reviewing your configuration; scroll down and click Create Auto Scaling group.
    • In the Auto Scaling groups dashboard, select your group and go to Instance management; you will see that your instances are ready.
    • If you go to your EC2 dashboard, you will see that your instances have been created. If you terminate an instance, a new one is created automatically, because the group always maintains the desired number of instances you defined.
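The console walkthrough above can be sketched in two AWS CLI calls. The group name, template name, subnets, capacities, and CPU target below are illustrative placeholders:

```shell
# Sketch only: names, subnets, and capacities are placeholders.
# Create the Auto Scaling group from an existing launch template
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --launch-template LaunchTemplateName=my-template,Version='$Latest' \
  --min-size 1 --max-size 4 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"

# A target-tracking scaling policy that keeps average CPU around 50%
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name cpu-target-50 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
    "TargetValue": 50.0
  }'
```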
  • How to Access Private Instance by using Bastion Host

    Introduction

    You may have services that you don't want accessible from the internet; in that case, you run private instances in private subnets. These are typically backend servers that don't accept incoming traffic from the internet, which plays an important role in protecting your private servers. Because your backend is not reachable from the internet, to access it from your local machine you need a Bastion Host.

    A Bastion Host is a server that is exposed to a public network and helps you connect to private instances or databases. You SSH into the Bastion Host first, and from there SSH into your private instance.

    Prerequisites

    To perform this task, you need the following in place:

    • Create a VPC first with a public subnet and a private subnet with its security group configuration.
    • You need to create your private instance/Database in the private subnet and Bastion Host in the public subnet. If you want to learn how to create VPC you can check our other blog – Introduction to VPC.

    Let’s set up an instance in a private subnet

    • Go to the EC2 dashboard and click Launch instance.
    • Name your instance and select the OS according to your requirements.
    • Select the instance type for you and after that select a key pair.
    • If you have the key pair then use it otherwise create a new key pair. 
    • Now comes an important step: configure your instance in a private subnet by clicking the Edit option in the network settings.
    • Select the VPC in which you want to create your instance and, just below it, the private subnet.
    • Set Auto-assign public IP to Disable; if it is enabled, a public IP is automatically assigned to your instance.
    • You can use an existing security group or create a new one.
    • Add an inbound security group rule for SSH so that you can connect to your instance over SSH.
    • Finally, configure your storage and click Launch instance.

    Let’s create a Bastion Host

    • Launch another EC2 instance, following the same steps as above.
    • While configuring the network setting you need to add the same VPC in which you created your private instance, and you need to select the public subnet.
    • Enable Auto-assign public IP so the instance gets a public IP and can be reached from the internet.
    • In the security group section, create a security group for the Bastion Host.
    • In the inbound rules, add a rule allowing SSH (port 22), ideally restricted to your own IP address, so that you can reach the Bastion Host from the internet.
    • Finally, configure storage and launch your instance.
    • Go to the instances dashboard; you will see both your private instance and the Bastion Host.
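The Bastion Host launch can be sketched with the AWS CLI. All IDs below are placeholders; the key name matches the `prproject.pem` key used later in this tutorial:

```shell
# Sketch only: AMI, subnet, and security group IDs are placeholders.
# Launch the Bastion Host in the public subnet with a public IP
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --key-name prproject \
  --subnet-id subnet-public1111 \
  --security-group-ids sg-bastion0000 \
  --associate-public-ip-address \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=bastion-host}]'
```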

    Access private instances through Bastion Host

    • To access your private instance, you first need to access the Bastion Host.

    Accessing Bastion host

    • Open a terminal on your local machine.
    • Connect to the Bastion Host with the ssh command:

    ssh -i "prproject.pem" ec2-user@<bastion-public-ip>

    Access private instance

    • Now copy the private instance's key file to the Bastion Host.
    • Then run the SSH command from the Bastion Host to access the private instance:

    ssh -i "pproject.pem" ec2-user@<private-instance-ip>
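Copying a private key onto the Bastion Host works, but leaving keys on an intermediate server is a security risk. An alternative worth knowing, assuming the user name and the bracketed addresses below stand in for your own, is OpenSSH's ProxyJump (`-J`) option, which tunnels through the bastion without storing the key there:

```shell
# Jump through the bastion to the private instance in one command;
# the key stays on your local machine and is offered for both hops.
ssh -i prproject.pem \
  -J ec2-user@<bastion-public-ip> \
  ec2-user@<private-instance-ip>
```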