Category: Automation & Tools

  • Introducing Docker Offload: Run Docker Builds & Containers in the Cloud with Ease

    Working with resource-heavy Docker builds or containers that push the limits of your local machine? Docker Offload makes it easy to move that work to the cloud—without changing your development workflow.

    Docker Offload is a fully managed service that lets you run Docker builds and containers in a remote, cloud-based environment while still using Docker as you normally would on your local machine. It’s ideal for tasks that demand high performance—such as running LLMs, machine learning pipelines, or GPU-accelerated applications.

    Why Choose Docker Offload?

    Today’s developers often juggle local development with the need for scalable infrastructure. Docker Offload bridges that gap by offering:

    • Cloud-based resources to handle large or complex builds

    • Faster build times and quicker development feedback loops

    • GPU support for compute-heavy workloads

    • Docker Compose compatibility for managing multi-service applications in the cloud

    Whether you’re running on a lightweight laptop or just want to speed things up, Docker Offload brings scalable power to your workflow.

    Great use cases include:

    • Machine learning model training or inference

    • Running large language models (LLMs)

    • Heavy-duty CI/CD pipelines

    • Resource-intensive microservices and cloud-native applications


    Getting Started with Docker Offload

    Step 1: Sign Up and Subscribe

    To begin using Docker Offload, you’ll need a Docker account and an active subscription that includes access to the service.

    Step 2: Enable Docker Offload

    1. Open Docker Desktop and sign in to your Docker account.

    2. Launch your terminal and run:

    docker offload start
    3. Choose the Docker account that will be used for Offload.

    4. If prompted, decide whether to enable GPU support. Enabling this option runs your containers on an NVIDIA L4 GPU—ideal for AI or ML workloads.

    Note: GPU usage will increase your consumption of Docker credits.


    Step 3: Run a Container in the Cloud

    Once Docker Offload is running, your local Docker CLI will communicate with a secure cloud environment behind the scenes. You use it just like your local Docker engine.

    To test it out, try running:

    docker run --rm hello-world
    

    If GPU support is enabled, you can test that too:

    docker run --rm --gpus all hello-world
    

    If Docker Offload is working correctly, you’ll see the familiar “Hello from Docker!” message.


    Step 4: Stop Docker Offload

    To switch back to local builds and containers, simply stop the Offload service:

    docker offload stop
    

    You can restart Offload at any time using the same start command.


    Performance Tips for Faster Builds

    Because Docker Offload runs your builds remotely, files need to be uploaded to the cloud. This means that transfer speeds and latency can affect build times, especially with larger projects.

    Docker includes several features to minimize delays:

    • Fast access to build caches via attached volumes

    • Efficient syncing that only uploads layers that have changed

    • Optimized layer pulling when transferring results back to your machine

    To make the most of Docker Offload, consider these best practices:

    • Use a .dockerignore file to skip unnecessary files

    • Start with slim base images to reduce image size

    • Use multi-stage builds to optimize output

    • Download external files during the build process instead of including them locally

    • Take advantage of parallel build tools to speed things up
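    As a sketch of two of these practices together, the snippet below writes a .dockerignore and a multi-stage Dockerfile into a temporary directory. The file contents, the Go toolchain, and the distroless base image are illustrative choices, not something the article prescribes:

```shell
# Illustrative only: a .dockerignore that skips unnecessary files, and a
# multi-stage Dockerfile that ships only the final binary in a slim image.
workdir=$(mktemp -d)

cat > "$workdir/.dockerignore" <<'EOF'
.git
node_modules
*.log
EOF

cat > "$workdir/Dockerfile" <<'EOF'
# Stage 1: build in a full-featured image
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Stage 2: copy only the compiled binary into a slim runtime image
FROM gcr.io/distroless/static
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
EOF
```

    Because the final stage starts from a slim base and copies in only the build output, far less data has to be uploaded and pulled back when building with Offload.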


    Build Smarter, Run Faster

    Docker Offload gives you the flexibility to use cloud resources only when you need them—without changing how you work. Whether you’re building containers, running GPU workloads, or managing complex Docker Compose apps, Offload lets you scale your environment without overloading your hardware.

    To get started, just run:

    docker offload start
    

    No infrastructure setup. No workflow changes. Just more power when you need it.

  • Exploring Docker Architecture and Command Usage

    Components of Docker architecture

    Docker uses a client-server architecture, which allows you to interact with Docker through a command-line interface (CLI) or a graphical user interface (GUI) while the Docker daemon (server) does the actual work of managing containers. Here’s an overview of the Docker architecture:

    Docker Client

    This is the interface that a user interacts with. It’s usually a command-line tool (CLI) or a GUI. When you issue a command like “run a container,” the Docker Client translates it into a request to the Docker Daemon.

    Docker Daemon

    The Docker Daemon is a background process running on your system. It’s responsible for the heavy lifting of container management. It listens for API requests from the Docker Client and acts on them. It can create, run, and manage containers, as well as handle tasks like image management, containerizing, data volumes, and network configuration.

    Docker Registry

    Docker images, which are portable snapshots of applications and their dependencies, are stored in Docker Registries. These are like online repositories where you can upload, share, and download container images. Docker Hub is a popular public registry, but you can also have private ones for your organization.

    Default Registry: Docker comes with a default registry called the Docker Hub. It’s like a public library that contains a vast collection of images for various programming languages and platforms. When you request an image in Docker, it first checks the Docker Hub to see if it’s available there. This is like going to a public library to find a book.

    Private Registry: Just as you can have your personal book collection at home, you can also have your private Docker registry. This is like having your own library where you store images that are specific to your needs, like custom applications or configurations. You can configure Docker to pull from and push to this private registry instead of Docker Hub.

    Commands:

    To log in to Docker Hub, run the following and enter your username and password when prompted:

    docker login

    Docker Images

    Images in Docker serve as templates or snapshots of a file system with a specific set of files and configurations.

    Application Dependencies: Images specify all the libraries, binaries, and resources required for an application to run. This includes everything from the base operating system to any additional software packages or libraries that an application needs to run.

    Launch Processes: Images define the initial command or process that should be executed when a container is started. This command often kicks off the main application or service within the container.

    Commands

    To see all the images present on your local machine, run:

    docker images

    To search for images on Docker Hub:

    docker search <image_name>

    To download (pull) an image from Docker Hub to your local machine:

    docker pull <image_name>

    To push a Docker image to a registry such as Docker Hub:

    docker push <image_name>:<tag>

    To add or change tags for a Docker image

    docker tag <source_image>[:<source_tag>] <target_image>[:<target_tag>]
    • <source_image> - the name or image ID of the image you want to tag.
    • <source_tag> - the existing tag of the source image; if omitted, it defaults to latest.
    • <target_image> - the new name you want to give to your image.
    • <target_tag> - the new tag you want to give to your image; if omitted, it defaults to latest.
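    For instance, to publish a local image under your own Docker Hub namespace, you might tag it by ID and then push it. The image ID 0e5574283393 and the repository name myuser/myapp below are hypothetical, and the commands are guarded so the sketch is a no-op on a machine without that image (or without Docker at all):

```shell
# Hypothetical example: tag a local image by ID, then push it under the new name.
if docker image inspect 0e5574283393 >/dev/null 2>&1; then
  docker tag 0e5574283393 myuser/myapp:1.0
  docker push myuser/myapp:1.0 || true   # push requires `docker login` first
  tag_result="tagged 0e5574283393 as myuser/myapp:1.0"
else
  tag_result="image 0e5574283393 not present locally (example only)"
fi
echo "$tag_result"
```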

    To get detailed information about a Docker image, including its layers and metadata.

    docker image inspect <image_name/id>:<tag>

    To remove a Docker image:

    docker rmi <image_name_or_id>

    Docker Build

    Docker Build is a core feature of Docker Engine used extensively in software development. At its core, it’s a command that facilitates the creation of Docker images. This process involves taking your application’s source code, along with necessary dependencies and configuration, and packaging it into a standalone, runnable image.

    Every Docker build begins with a Dockerfile.

    Docker File

    A Dockerfile is like a set of instructions written in plain text. It tells a computer what to do step by step to create a special kind of package called an “image.” This image can contain everything an application needs to run.

    So, with a Dockerfile, you can automate the process of building this package. It’s like giving your computer a recipe to follow. It reads the Dockerfile and carries out each instruction to create the image you want. This makes it easier to create and manage these packages for your applications.

    Here are the most common types of instructions used to build a Docker File,

    FROM base_image:tag  # A base image use to create the container 
    
    MAINTAINER "Your Name"  # Author/owner name (deprecated; prefer LABEL maintainer="Your Name")
    
    WORKDIR /app   # Set the working directory inside the container where instruction will be executed
    
    COPY source_path destination_path   # Copy files from the host into the container
    
    RUN apt-get update && apt-get install -y package1 package2   # Executes a command during the image build process. Commonly used for installing packages and dependencies.
    
    EXPOSE port_number   # Expose ports for the container (for runtime). Informs Docker that the container will listen on the specified port at runtime. It doesn't actually publish the port.
    
    ENV VAR_NAME=value   # Sets environment variables inside the container.
    
    CMD ["executable", "arg1", "arg2"]  # Defines the default command to run when a container starts. It can be overridden at runtime. Only one CMD per Dockerfile takes effect.
    
    VOLUME ["/data"]  # Define a volume for data storage, it helps in creating a mount point and marks it for storing persistent data.
    
    SHELL ["/bin/bash", "-c"]   # Change the default shell used for subsequent RUN, CMD, and ENTRYPOINT instructions.
    
    ENTRYPOINT ["executable", "arg1", "arg2"]   # Set the entry point to a script. The ENTRYPOINT instruction specifies the command that should be run when the container starts. It is often used for specifying the main executable or entry point of your application.

    Building images on your local machine

    • Let’s create a Dockerfile to run Hello World.
    • The first step is to create a file named Dockerfile in a particular folder.
    • Edit the Dockerfile and add the following instructions to build an image:
    FROM ubuntu
    MAINTAINER <owner_name>
    RUN apt-get update
    CMD ["echo","Hello World"]

    Build your image from the Dockerfile using the command below:

    # when you are in the same directory where the Dockerfile is present
    docker build -t <image_name>:<tag> .
    # when you are in a different directory, specify the path to your Dockerfile's directory
    docker build -t <image_name>:<tag> /path/to/dockerfile/

    Use the docker images command to see the image that you built. You will find your image with the name you gave it and a unique image ID.

    Containers

    In Docker, containers are lightweight, self-contained packages for running software. They include everything an application needs, such as code, libraries, and settings, and act as executable instances of Docker images.

    Images: Docker images are like blueprints that contain all the instructions for creating a container. They include the application code, dependencies, and configuration.

    Containers: Containers are the actual running instances of Docker images. When you start a Docker container, you’re essentially using an image to create a live environment to run your application. Each container is isolated from others, so they don’t interfere with each other.

    Isolation: Docker containers are isolated from each other, much like how objects in code are encapsulated and don’t affect each other’s data.

    Commands

    To show all actively running containers ("ps" stands for process status):

    docker ps

    To see all containers, including stopped ones:

    docker ps -a

    This command creates and runs a container from an image:

    docker run --name <container_name> <image_id/name> 

    This command is used to view detailed information about a container.

    docker inspect <container_name/id>

    Start a container from an image:
    --name assigns a name to the container
    -d runs the container in detached mode, i.e. in the background
    -it allocates an interactive terminal so the container stays running and accepts input

    docker run --name <container_name> -it -d <image_ID/name>

    Start, Stop, and restart Containers:

    docker start <container_ID/Name>
    docker stop <container_ID/Name>
    docker restart <container_ID>

    To run a command inside a running container, use exec; to attach your terminal to the container’s main process, use attach:

    docker exec -it <container_ID> <command>
    docker attach <container_ID>
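    As a concrete (hypothetical) illustration of the difference: exec starts a new process inside the container, so exiting that shell leaves the container running, whereas attach connects you to the container’s main process. The container name web below is made up, and the command is guarded so it does nothing without a running container of that name:

```shell
# Hypothetical container name "web"; guarded so this is a no-op without Docker.
exec_note="container 'web' not running (example only)"
if docker container inspect web >/dev/null 2>&1; then
  # Start an EXTRA interactive shell inside "web"; exiting it does not stop the container.
  docker exec -it web /bin/bash
  exec_note="opened a shell in 'web' with exec"
fi
echo "$exec_note"
```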


    To remove/delete a container:
    -f forcefully removes a running container
    docker container prune removes all stopped containers, freeing up disk space

    docker rm <container_ID>
    docker container prune

    To display the logs of a container:

    docker logs <container_ID>

    To copy files from your local machine to a container, or vice versa:

    docker cp <local_path> <container_ID>:<container_path>

    To create a volume to store data outside the container, so it persists after the container is removed:
    -v mounts a volume into the container at the specified path

    docker run -v <volume_name>:<container_path> <image>
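    To see why this matters, the sketch below writes a file through a named volume, lets that container exit, and reads the same file back from a brand-new container. The volume name appdata and the alpine image are illustrative, and the commands are guarded so they are a no-op without a running Docker daemon:

```shell
# Demonstrates volume persistence; "appdata" is a hypothetical volume name.
vol_note="Docker daemon not available (example only)"
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  # First container writes into the named volume, then exits and is removed (--rm).
  docker run --rm -v appdata:/data alpine sh -c 'echo hello > /data/greeting'
  # A fresh container still sees the data, because it lives in the volume.
  vol_note=$(docker run --rm -v appdata:/data alpine cat /data/greeting 2>/dev/null \
    || echo "container run failed (example only)")
fi
echo "$vol_note"
```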

    To show real-time resource usage statistics of a container:

    docker stats <container_ID>

    Registries

    A registry is a library or a storage place for images, which are the building blocks of containers.


    Docker engine

    The Docker Engine is like the heart of Docker, where everything happens. It’s the software that makes containers work on your computer.

    Docker Daemon: It runs on your computer and does all the heavy lifting. It’s responsible for creating, managing, and running containers.

    Docker Client: It’s a command-line tool you use to tell the Docker Daemon what to do. You give it special commands, and it communicates with the Daemon to make things happen.

    Docker compose

    Docker Compose is a valuable tool for defining and running multi-container Docker applications. It simplifies the process of configuring and managing complex applications by utilizing a YAML file to specify the setup of your application’s various services. This configuration file acts as a blueprint, and with a single command, you can create and launch all the services according to the defined configuration.

    The significant benefit of using Docker Compose is that you can specify your application’s entire setup in a single file, which you keep at the root of your project’s repository. This file can be version-controlled, making it easy for others to collaborate on your project. All someone needs to do is clone your repository and start the application using Docker Compose. As a result, you may find many projects on platforms like GitHub and GitLab adopting this approach.

    Command to install Docker Compose on Ubuntu:

    sudo apt install docker-compose

    To check the Docker Compose version:

    docker-compose --version

    Docker-compose.yml File

    Docker Compose is the tool we use for setting up our local development environments. To define how the containers work together, Docker Compose relies on a file called docker-compose.yml. This file specifies important details, such as which images are needed, which ports should be made available, whether the containers can access the host’s files, and what commands to execute when they begin running. In essence, it’s a blueprint that describes how everything should work together.
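    A minimal hypothetical docker-compose.yml illustrating such a blueprint might look like this (the service names, port, and images are placeholders, not from a real project):

```yaml
version: "3.8"
services:
  web:
    build: .                # build the web image from the Dockerfile in this directory
    ports:
      - "8000:8000"         # publish container port 8000 on host port 8000
    volumes:
      - .:/app              # mount the project directory into the container
    depends_on:
      - redis               # start redis before web
  redis:
    image: redis:7          # use a ready-made image from Docker Hub
```

    With this file at the root of the project, a single docker-compose up -d starts both services together.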

    Running docker-compose

    docker-compose up -d

    To see running containers and their state with ports:

    docker-compose ps

    To check the logs of the docker-compose use the command:

    docker-compose logs

    To stop, pause, and unpause the running docker-compose:

    docker-compose stop
    docker-compose pause
    docker-compose unpause

    To remove the containers, networks, and volumes associated with this containerized environment, use this command:

    docker-compose down
  • Docker Introduction and Its Installation Steps

    What is Docker?

    Docker is an open-source platform that helps you build, deploy, run, and manage applications by packaging an application together with all of its dependencies into a container. It is the most popular tool for containerizing applications, and it helps you deliver software quickly by separating your application from your infrastructure.

    Docker provides a loosely isolated environment so that you can package and run your applications smoothly. You can run many containers simultaneously on a single host.

    How does Docker Work?

    Docker follows a client-server model in its architecture. At its core, there are two key components: the Docker client and the Docker daemon. These components collaborate to handle tasks such as container creation, execution, and distribution. They can be situated on the same machine, or you can configure a Docker client to interact remotely with a Docker daemon.

    Communication between the Docker client and daemon takes place via a REST API, utilizing either UNIX sockets for local communication or a network interface for remote interaction. In addition to the Docker client, there is another tool called Docker Compose, which facilitates the management of applications composed of multiple containers.

    • The Docker daemon is always on the lookout for instructions from the Docker API, which is like its phone line for receiving commands. It manages important things in the Docker world like images, containers, networks, and volumes. But here’s the cool part: The Docker daemon can talk with other daemons when needed.
    • The Docker client, which is commonly called “docker,” is like the main control panel for most people using Docker. When you tell it to do something, like running a container with the command “docker run,” it sends those instructions to the Docker daemon, which is like the worker behind the scenes. This Docker daemon then carries out the actual work. Think of the Docker client as the one who talks to you, and the Docker daemon as the one who does the heavy lifting. They communicate using something called the Docker API, which is like their language.
    • Think of a Docker registry as a big digital library where you can find special software packages called “Docker images.”
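    That REST API can be exercised directly: on Linux the daemon listens by default on the UNIX socket /var/run/docker.sock, and a plain curl call against it returns engine version information as JSON. The check below is guarded so it degrades to a note on machines without a running daemon or without curl:

```shell
# Talk to the Docker daemon's REST API over its default UNIX socket.
if [ -S /var/run/docker.sock ] && command -v curl >/dev/null 2>&1; then
  api_out=$(curl -s --unix-socket /var/run/docker.sock http://localhost/version)
else
  api_out="no Docker daemon socket at /var/run/docker.sock (example only)"
fi
echo "$api_out"
```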

    Install Docker

    You can install Docker on different platforms, and each has its own installation method:

    • Linux
    • macOS
    • Windows

    Install docker on Linux

    When it comes to installing Docker on a Linux system, you have two primary methods based on your distribution: the Debian method and the RPM method. These methods cater to Debian-based and Red Hat-based distributions, respectively.

    • Install docker for Debian

    Debian-based Linux distributions, such as Ubuntu, offer several methods to install Docker. Each method has its own characteristics, and the choice depends on factors like system requirements, preferences, and use cases. Here are some of the main methods for installing Docker on Debian-based systems.

    Install Docker from the apt repository

    sudo apt-get update 
    sudo apt-get install docker.io

    Install Docker from the Package

    1. Go to https://download.docker.com/linux/debian/dists/
    2. Select the Debian version in the list you want to download
    3. Download the following deb files for the Docker
    • containerd.io_<version>_<arch>.deb
    • docker-ce_<version>_<arch>.deb
    • docker-ce-cli_<version>_<arch>.deb
    • docker-buildx-plugin_<version>_<arch>.deb
    • docker-compose-plugin_<version>_<arch>.deb
    4. Install the .deb packages: go to the location where you downloaded the Docker packages, and run the command.
    sudo dpkg -i ./containerd.io_<version>_<arch>.deb \
      ./docker-ce_<version>_<arch>.deb \
      ./docker-ce-cli_<version>_<arch>.deb \
      ./docker-buildx-plugin_<version>_<arch>.deb \
      ./docker-compose-plugin_<version>_<arch>.deb

    Docker Daemon will start automatically.

    To check that the Docker daemon installed successfully, you can run the hello-world image.

    sudo service docker start
    sudo docker run hello-world

    Install docker for RPM

    RPM-based Linux distributions, such as CentOS and Fedora, offer various methods to install Docker. These methods follow different use cases and user preferences. Here are some of the primary methods for installing Docker on RPM-based systems.

    Install docker using the rpm repository.

    sudo yum update
    sudo yum install docker.x86_64

    Install docker from the package.

    Go to the Docker official repository for CentOS and other RPM-based distributions by navigating to the following URL in your web browser: https://download.docker.com/linux/.

    Choose the version of rpm that corresponds to your system. You’ll see directories like ‘7’ or ‘8’ for CentOS 7 or CentOS 8, and also for other rpm-based OS respectively. Click on the directory that matches your version.

    Click on the .rpm file for the Docker version you’ve chosen. Your web browser will typically prompt you to download the file. Save it to your local machine.

    Once you have downloaded the Docker .rpm file, you can proceed to install Docker on your system using the package manager, typically with a command, replacing <version> with the actual version you downloaded.

    sudo yum install ./docker-ce-<version>.rpm

    To check that the Docker daemon installed successfully, you can run the hello-world image.

    sudo systemctl start docker
    sudo docker run hello-world

    Install Docker on Mac

    When it comes to installing Docker on macOS, you have two main methods: Interactive Installation and Command Line Interface (CLI) Installation. Each method serves a specific purpose and offers distinct advantages.

    Install Docker Interactively

    1. The first step is to download the Docker.dmg file, Locate the downloaded Docker.dmg file, and double-click it. This action opens the Docker installer, showing you its contents.
    2. You’ll see the Docker icon inside the installer window. To install Docker on your macOS system, simply drag this Docker icon and drop it into your Applications folder. This process effectively installs Docker on your computer.
    3. Now that Docker is installed, go to your Applications folder. Inside this folder, find the Docker.app icon. To start Docker, double-click on Docker.app. This action launches the Docker application.
    4. When Docker starts, you’ll notice the Docker menu, and you may encounter the Docker Subscription Service Agreement. This agreement outlines the terms and conditions for using Docker. Carefully review the agreement, click the “Accept” button to proceed.
    5. During the installation of Docker Desktop on macOS, you have the option to choose between two configuration modes: “Use recommended settings” or “Use advanced settings.” These settings allow you to customize the Docker Desktop according to your requirements.
    6. After you’ve made your configuration choices during the Docker Desktop installation on macOS, you’ll proceed to the final step.

    Install Docker from the command line

    To install Docker Desktop on macOS using terminal commands instead of the graphical interface, you can follow these commands. After downloading Docker.dmg run commands to install in Application Folder:

    sudo hdiutil attach Docker.dmg
    sudo /Volumes/Docker/Docker.app/Contents/MacOS/install
    sudo hdiutil detach /Volumes/Docker

    Install Docker on Windows

    • To install Docker on Windows, you can use Docker Desktop, which provides an easy way to run Docker containers on Windows. Docker Desktop includes the Docker Engine, Docker CLI, and Docker Compose. Here are the steps to install Docker on Windows:
    • Make sure you have a 64-bit version of Windows 10 or later with Hyper-V and Windows Subsystem for Linux (WSL) 2 support. These features are required to run Docker containers.

    Enable Hyper-v and WSL feature

    • To check whether WSL or Hyper-V is enabled on your system, open the search bar and search for “Turn Windows features on or off.” If the Hyper-V and WSL features are already enabled, move to the next step; otherwise, enable them and restart your system.
    • To verify the installation, open a terminal and run wsl --version. It should report WSL version 2.

    Install Docker Interactively

    • Download and Install Docker Desktop:
    • Run the installer/.exe file.
    • Run the Docker installer and follow the instructions.
    • If asked to authorize the installer, click “Yes” or “Authorize.”     
    • Once it’s done, click “Close” to finish the installation.

    Install from Command Line

    1. After downloading “Docker Desktop Installer.exe,” open a terminal and run the following command in the terminal:

    If you’re using Terminal you should run the command:

    "Docker Desktop Installer.exe" install

    If you’re using PowerShell you should run the command:

    Start-Process 'Docker Desktop Installer.exe' -Wait install

    If using the Windows Command Prompt run the command:

    start /w "" "Docker Desktop Installer.exe" install

    How to run Docker Commands without using Sudo

    • The first step is to add your user to the docker group. If the docker group is not present on your system by default, create a group named docker:
    $ sudo groupadd docker

    The next step is to add your user to that group; $USER expands to your username. Afterwards, log out and back in (or run newgrp docker) for the change to take effect.

    $ sudo gpasswd -a $USER docker

    Restart the docker Daemon

    $ sudo service docker restart
    # if you are using Docker on Ubuntu, you can equivalently run
    $ sudo systemctl restart docker
  • How to integrate Puppeteer with Node on Ubuntu

    “Puppeteer” is a special tool made by the Chrome team that lets you use a web browser, like Chrome, without seeing it on your screen. You can tell it what to do using code. People use Puppeteer for things like copying information from websites, filling out online forms automatically, making PDF files, and lots of other tasks on the internet. It’s like having a robot that can browse the web for you.

    Update your system

    sudo apt-get update -y

    Install dependencies

    sudo apt install -y gconf-service libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 ca-certificates fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils wget

    Puppeteer, even though it doesn’t have a graphical interface, relies on some libraries that interact with the X11 server. One of these is libxcb, which needs the shared library libX11-xcb.so.1. To fix this on Debian-based systems, you can install the libx11-xcb1 package.

    But sometimes, when you install a missing library, you might run into another missing library, and this can keep happening. That’s why it’s important to install a list of the required libraries to ensure Puppeteer runs smoothly.
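    You can run this kind of dependency check yourself with ldd: any output line reading “not found” names a shared library that still needs to be installed. The sketch below inspects /bin/sh (present on any Linux box) so it runs anywhere; point it at the Chrome binary, e.g. /usr/bin/google-chrome, when debugging Puppeteer launch failures:

```shell
# Count unresolved shared-library dependencies for a binary.
# /bin/sh is used here only so the check works everywhere; substitute the
# Chrome binary path when troubleshooting Puppeteer.
missing="ldd not available"
if command -v ldd >/dev/null 2>&1; then
  # grep -c prints the number of matching lines; 0 means nothing is missing.
  missing=$(ldd /bin/sh | grep -c 'not found' || true)
fi
echo "libraries missing for /bin/sh: $missing"
```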

    Install NodeJS using NVM

    $ wget https://raw.githubusercontent.com/nvm-sh/nvm/master/install.sh
    $ bash install.sh
    $ source ~/.bashrc
    $ nvm list-remote
    $ nvm install v18
    $ nvm use 18
    $ node --version

    Install puppeteer

    $ mkdir -p /home/ubuntu/.cache/puppeteer
    $ chmod -R 700 /home/ubuntu/.cache/puppeteer
    $ mkdir your-project
    $ cd your-project
    $ npm init -y
    $ npm install puppeteer

    Install Google Chrome

    $ wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
    $ sudo apt install ./google-chrome-stable_current_amd64.deb -y
    $ google-chrome --version

    To test if Puppeteer was installed successfully, you can create a JavaScript script named “test.js” and use the following code:

    const puppeteer = require('puppeteer');
    (async () => {
      const browser = await puppeteer.launch({
        args: ["--no-sandbox", "--disable-setuid-sandbox"]
      });
      const page = await browser.newPage();
      const url = 'https://google.com';
      await page.goto(url);
      await page.pdf({ path: 'page.pdf', format: 'A4' });
      await browser.close();
    })();

    Test it:

    node test.js

    Troubleshooting :

    Error 1: Could not find Chrome (ver. xxx.xxx.xx).

    1. You did not perform an installation before running the script (e.g. `npm install`), or
    2. your cache path is incorrectly configured (which is: /home/ubuntu/.cache/puppeteer); check whether the chrome directory exists:
       $ ls /home/ubuntu/.cache/puppeteer
    3. Verify that Google Chrome is correctly installed:
       $ ls /usr/bin/google-chrome
        

    Error 2: Failed to launch the browser process! [1003/185100.101396:ERROR:zygote_host_impl_linux.cc(100)] Running as root without --no-sandbox is not supported.

    To fix this error, edit the Chrome wrapper script (note that the test script above already passes --no-sandbox to puppeteer.launch, which is the preferred fix):

    $ vim /usr/bin/google-chrome

    and replace the line

    exec -a "$0" "$HERE/chrome" "$@"

    with

    exec -a "$0" "$HERE/chrome" "$@" --no-sandbox

    Chrome Supported Versions:

    Chrome  117.0.5938.149 - Puppeteer v21.3.7
    Chrome  117.0.5938.92 - Puppeteer v21.3.2
    Chrome  117.0.5938.62 - Puppeteer v21.3.0
    Chrome  116.0.5845.96 - Puppeteer v21.1.0
    Chrome  115.0.5790.170 - Puppeteer v21.0.2
    Chrome  115.0.5790.102 - Puppeteer v21.0.0
    Chrome  115.0.5790.98 - Puppeteer v20.9.0
    Chrome  114.0.5735.133 - Puppeteer v20.7.2
    Chrome  114.0.5735.90 - Puppeteer v20.6.0
    Chrome  113.0.5672.63 - Puppeteer v20.1.0
    Chrome  112.0.5615.121 - Puppeteer v20.0.0