<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[GRAVIX-DEVOPS]]></title><description><![CDATA[GRAVIX-DEVOPS]]></description><link>https://gurjar-vishal.me</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1750745600628/1b4bbf6e-7cd6-452b-aca3-53e5701f13bd.png</url><title>GRAVIX-DEVOPS</title><link>https://gurjar-vishal.me</link></image><generator>RSS for Node</generator><lastBuildDate>Wed, 08 Apr 2026 11:13:22 GMT</lastBuildDate><atom:link href="https://gurjar-vishal.me/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[How to Deploy a 3-Tier Web App Architecture on AWS with VPC]]></title><description><![CDATA[In this project, I designed and set up a 3-Tier Web Application architecture within a custom Virtual Private Cloud (VPC) using AWS services. This setup follows industry best practices for security, scalability, and separation of concerns.

🔒 Secure ...]]></description><link>https://gurjar-vishal.me/how-to-deploy-a-3-tier-web-app-architecture-on-aws-with-vpc</link><guid isPermaLink="true">https://gurjar-vishal.me/how-to-deploy-a-3-tier-web-app-architecture-on-aws-with-vpc</guid><category><![CDATA[AWS]]></category><category><![CDATA[3-tier-architecture]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[GURJAR VISHAL]]></dc:creator><pubDate>Sat, 06 Sep 2025 04:09:19 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757131621521/139e52b6-2bc2-4925-b9f9-2e576441ca3b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this project, I designed and set up a <strong>3-Tier Web Application</strong> architecture within a custom Virtual Private Cloud (VPC) using AWS services. This setup follows industry best practices for security, scalability, and separation of concerns.</p>
<blockquote>
<p><strong><em>🔒 Secure | ⚙️ Modular | ☁️ AWS-Powered</em></strong></p>
</blockquote>
<p>In this blog, I’ll show you how I deployed a <strong>3-tier application</strong> on AWS using a <strong>custom VPC</strong>. This architecture includes:</p>
<ul>
<li><p><strong>Nginx</strong> as a Reverse Proxy (Web Layer)</p>
</li>
<li><p><strong>Apache Tomcat</strong> as the Application Server</p>
</li>
<li><p><strong>MySQL</strong> as the Database Server</p>
</li>
</ul>
<h2 id="heading-what-is-3-tier-architecture">🧱 What is 3-Tier Architecture?</h2>
<p>A 3-tier architecture separates the app into:</p>
<ol>
<li><p><strong>Web Tier</strong> (Nginx) – Handles incoming HTTP requests</p>
</li>
<li><p><strong>App Tier</strong> (Tomcat) – Runs backend application logic</p>
</li>
<li><p><strong>DB Tier</strong> (MySQL) – Stores application data</p>
</li>
</ol>
<h2 id="heading-tech-stack-amp-aws-services-used">🔧 Tech Stack &amp; AWS Services Used</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<th>Layer</th><th>Component</th><th>AWS Service</th></tr>
</thead>
<tbody>
<tr>
<td>Web</td><td>Nginx</td><td>EC2 in Public Subnet</td></tr>
<tr>
<td>App</td><td>Tomcat</td><td>EC2 in Private Subnet</td></tr>
<tr>
<td>DB</td><td>MySQL</td><td>EC2 in Private Subnet</td></tr>
<tr>
<td>Network</td><td>VPC, Subnets, Route Tables</td><td>AWS VPC</td></tr>
<tr>
<td>Others</td><td>NAT Gateway, IGW, SGs</td><td>AWS Infra</td></tr>
</tbody>
</table>
</div><h2 id="heading-high-level-architecture-diagram">🗺️ High-Level Architecture Diagram</h2>
<p>This setup includes:</p>
<ul>
<li><p><strong>1 Public Subnet</strong> for Nginx</p>
</li>
<li><p><strong>2 Private Subnets</strong>: App Tier (Tomcat) and DB Tier (MySQL)</p>
</li>
<li><p><strong>Security Groups</strong> with limited, directional access</p>
</li>
<li><p><strong>NAT Gateway</strong> for outbound internet access from private subnets</p>
</li>
</ul>
<h2 id="heading-step-by-step-implementation">🪜 Step-by-Step Implementation</h2>
<h3 id="heading-step-1-create-vpc">✅ Step 1: Create VPC</h3>
<ul>
<li><strong>CIDR Block:</strong> <code>10.1.0.0/16</code></li>
</ul>
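<p>I used the AWS console, but the same VPC can be sketched from the AWS CLI (the <code>Name</code> tag below is illustrative, not part of this setup):</p>
<pre><code class="lang-bash">aws ec2 create-vpc \
    --cidr-block 10.1.0.0/16 \
    --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=three-tier-vpc}]' \
    --query 'Vpc.VpcId' --output text
</code></pre>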
<h3 id="heading-step-2-create-subnets">✅ Step 2: Create Subnets</h3>
<ul>
<li><p><code>10.1.1.0/24</code> – Public (Web: Nginx)</p>
</li>
<li><p><code>10.1.2.0/24</code> – Private (App: Tomcat)</p>
</li>
<li><p><code>10.1.3.0/24</code> – Private (DB: MySQL)</p>
</li>
</ul>
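<p>The three subnets can likewise be created via the CLI, assuming the VPC ID from the previous step is stored in <code>$VPC_ID</code> (the availability-zone name is illustrative):</p>
<pre><code class="lang-bash">aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.1.1.0/24 --availability-zone us-east-1a   # Public (Web: Nginx)
aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.1.2.0/24 --availability-zone us-east-1a   # Private (App: Tomcat)
aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.1.3.0/24 --availability-zone us-east-1a   # Private (DB: MySQL)
</code></pre>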
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752733258149/a96e39bc-bdba-4792-b84d-f806488bd2ca.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-step-3-setup-internet-gateway-nat">✅ Step 3: Setup Internet Gateway + NAT</h3>
<ul>
<li><p>IGW for public subnet (Nginx)</p>
</li>
<li><p>NAT Gateway placed in the public subnet, providing outbound internet access for the private subnets (App and DB)</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752733354085/163f7353-1a72-4e0a-9140-f97c4fb1b2c0.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752733337592/792f8e51-1b1c-4f60-a1e9-2fa0d0a7429b.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-step-4-configure-route-tables">✅ Step 4: Configure Route Tables</h3>
<ul>
<li><p>Public Route Table: <code>0.0.0.0/0 → IGW</code></p>
</li>
<li><p>Private Route Table: <code>0.0.0.0/0 → NAT</code></p>
</li>
</ul>
<h3 id="heading-a-public-route-table-configuration">✅ A. Public Route Table Configuration</h3>
<ul>
<li><p>I created a <strong>Route Table</strong> named <code>web-rt</code>.</p>
</li>
<li><p>I <strong>associated</strong> the <strong>public subnet</strong> (used for Nginx) with this <code>web-rt</code>.</p>
</li>
<li><p>Then, I edited the route in <code>web-rt</code>:</p>
<ul>
<li><p><strong>Destination</strong>: <code>0.0.0.0/0</code> (this allows internet traffic)</p>
</li>
<li><p><strong>Target</strong>: <strong>Internet Gateway</strong> (attached to the VPC)</p>
</li>
</ul>
</li>
<li><p>This allows public instances, such as the Nginx server, to access the internet directly.</p>
</li>
</ul>
<h3 id="heading-b-private-route-table-configuration">🔒 B. Private Route Table Configuration</h3>
<ul>
<li><p>I created another <strong>Route Table</strong> named <code>private-rt</code>.</p>
</li>
<li><p>I <strong>associated both private subnets</strong> (one for Tomcat app and one for MySQL DB) with this <code>private-rt</code>.</p>
</li>
<li><p>Then, I edited the route in <code>private-rt</code>:</p>
<ul>
<li><p><strong>Destination</strong>: <code>0.0.0.0/0</code></p>
</li>
<li><p><strong>Target</strong>: <strong>NAT Gateway</strong> (deployed in the public subnet)</p>
</li>
</ul>
</li>
<li><p>This setup allows private instances to <strong>access the internet only for updates</strong> (e.g., apt install), <strong>without being exposed</strong> to incoming public traffic.</p>
</li>
</ul>
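<p>The two route tables above can be expressed as a CLI sketch; <code>$VPC_ID</code>, <code>$IGW_ID</code>, and <code>$NAT_ID</code> are placeholders for the resources created earlier:</p>
<pre><code class="lang-bash"># Public route table (web-rt): default route via the Internet Gateway
WEB_RT=$(aws ec2 create-route-table --vpc-id "$VPC_ID" --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$WEB_RT" --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"

# Private route table (private-rt): default route via the NAT Gateway
PRIV_RT=$(aws ec2 create-route-table --vpc-id "$VPC_ID" --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$PRIV_RT" --destination-cidr-block 0.0.0.0/0 --nat-gateway-id "$NAT_ID"
</code></pre>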
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752733665222/d1317054-0d22-49e3-b23b-572ad27446f7.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752733721561/cdda6581-3208-4eea-9a93-92382065a8a2.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752733648562/e2b00add-4fe6-41b8-b613-0dbf50831383.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752733690138/9ce0dcd5-87b9-481e-bf28-5eb7eb092ebb.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-step-5-launch-ec2-instances">✅ Step 5: Launch EC2 Instances</h3>
<h4 id="heading-web-tier-nginx">🌍 Web Tier (Nginx)</h4>
<ul>
<li><p>EC2 in public subnet</p>
</li>
<li><p>Installed <strong>Nginx</strong></p>
</li>
<li><p>Acts as <strong>Reverse Proxy</strong> forwarding to Tomcat</p>
</li>
<li><p>SG allows inbound port <strong>80 (HTTP)</strong>; Nginx proxies requests onward to Tomcat on port <strong>8080</strong></p>
</li>
</ul>
<h4 id="heading-bastion-host">🛡 Bastion Host</h4>
<ul>
<li><p>EC2 in public subnet for SSH access to private EC2s</p>
</li>
<li><p>SG allows port: <strong>22</strong></p>
</li>
</ul>
<h4 id="heading-app-tier-tomcat">⚙ App Tier (Tomcat)</h4>
<ul>
<li><p>EC2 in private subnet</p>
</li>
<li><p>Installed <strong>Apache Tomcat</strong></p>
</li>
<li><p>SG allows traffic only from Nginx EC2 (Web SG)</p>
</li>
</ul>
<h4 id="heading-db-tier-mysql">💾 DB Tier (MySQL)</h4>
<ul>
<li><p>EC2 in private subnet</p>
</li>
<li><p>Installed <strong>MySQL</strong>, secured</p>
</li>
<li><p>SG allows port <strong>3306</strong> only from App Server</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752733817188/f9bffe6b-9f9e-4506-862f-e80eeebb120f.png?auto=compress,format&amp;format=webp" alt /></p>
<h4 id="heading-step-3-private-server-access-via-public-ec2-jump-server-method">🔐 Step 6: <strong>Private Server Access via Public EC2 (Jump Server Method)</strong></h4>
<p>Since the <strong>App</strong> and <strong>DB</strong> servers are in <strong>private subnets</strong>, I used a <strong>public EC2 instance</strong> (the jump host) to access them.</p>
<p><strong>Steps Followed:</strong></p>
<ol>
<li><p>SSH into Public Web Server using <code>.pem</code> file:</p>
<pre><code class="lang-bash">  ssh -i <span class="hljs-string">"my-key.pem"</span> ubuntu@&lt;public_ip&gt;
</code></pre>
</li>
<li><p>Created a new file using:</p>
<pre><code class="lang-bash">  vim jump.pem
</code></pre>
</li>
<li><p>Pasted the <strong>private key</strong> of the internal servers (App/DB) in <code>jump.pem</code>.</p>
</li>
<li><p>Changed permission:</p>
<pre><code class="lang-bash">  chmod 400 jump.pem
</code></pre>
</li>
<li><p>Then, from the public server, logged in to private server:</p>
<pre><code class="lang-bash">  ssh -i jump.pem ubuntu@10.1.2.97  <span class="hljs-comment"># App Server</span>
  ssh -i jump.pem ubuntu@10.1.3.105 <span class="hljs-comment"># DB Server</span>
</code></pre>
</li>
</ol>
<p>✅ <em>This way, I securely accessed private servers using the public EC2 as a jump point.</em></p>
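<p>As an alternative to copying the private key onto the public server, SSH can hop through it in a single command with <code>-J</code> (ProxyJump), keeping the key on your workstation — assuming both instances use the same key pair:</p>
<pre><code class="lang-bash">ssh -i my-key.pem -J ubuntu@&lt;public_ip&gt; ubuntu@10.1.2.97   # App Server via the jump host
</code></pre>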
<h2 id="heading-nginx-installation-on-web-ec2-instance-public-subnet">🌐 Nginx Installation on Web EC2 Instance (Public Subnet)</h2>
<p>After launching the <strong>Web EC2 instance</strong> (in the <strong>public subnet</strong>), I connected to it using <strong>SSH</strong> with the default Ubuntu user. Then I installed and configured <strong>Nginx</strong> as follows:</p>
<h3 id="heading-steps-performed">🔧 Steps Performed:</h3>
<ol>
<li><p><strong>SSH into EC2 Instance:</strong></p>
<pre><code class="lang-bash">  ssh -i <span class="hljs-string">"keypair.pem"</span> ubuntu@&lt;Public-IP&gt;
</code></pre>
</li>
<li><p><strong>Update the System Packages:</strong></p>
<pre><code class="lang-bash">  sudo apt update -y
</code></pre>
</li>
<li><p><strong>Install Nginx Web Server:</strong></p>
<pre><code class="lang-bash">  sudo apt install nginx -y
</code></pre>
</li>
<li><p><strong>Start the Nginx Service:</strong></p>
<pre><code class="lang-bash">  sudo systemctl start nginx
</code></pre>
</li>
<li><p><strong>Enable Nginx to Start on Boot:</strong></p>
<pre><code class="lang-bash">  sudo systemctl <span class="hljs-built_in">enable</span> nginx
</code></pre>
</li>
</ol>
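<p>To make Nginx actually forward traffic to the App tier, a minimal reverse-proxy server block is needed. A sketch, assuming the App Server's private IP from this setup (<code>10.1.2.97</code>) and overwriting the default site config:</p>
<pre><code class="lang-bash">sudo tee /etc/nginx/sites-available/default &gt; /dev/null &lt;&lt;'EOF'
server {
    listen 80;
    location / {
        proxy_pass http://10.1.2.97:8080;   # forward requests to Tomcat in the App tier
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF
sudo nginx -t &amp;&amp; sudo systemctl reload nginx   # validate the config, then reload
</code></pre>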
<h2 id="heading-tomcat-installation-on-app-ec2-instance-private-subnet">🚀 Tomcat Installation on App EC2 Instance (Private Subnet)</h2>
<p>On the <strong>App Server EC2 instance</strong> (launched in the <strong>private subnet</strong>), I installed and started <strong>Apache Tomcat</strong> to host Java-based web applications.</p>
<p>Since Tomcat requires Java, I installed JDK first, then downloaded and configured Tomcat.</p>
<h3 id="heading-steps-performed-1">🔧 Steps Performed:</h3>
<ol>
<li><p><strong>Update System Packages:</strong></p>
<pre><code class="lang-bash">  sudo apt update -y
</code></pre>
</li>
<li><p><strong>Install Java Development Kit (Required for Tomcat):</strong></p>
<pre><code class="lang-bash">  sudo apt install default-jdk -y
</code></pre>
</li>
<li><p><strong>Download the Latest Tomcat (Version 11.0.9):</strong></p>
<pre><code class="lang-bash">  wget https://downloads.apache.org/tomcat/tomcat-11/v11.0.9/bin/apache-tomcat-11.0.9.tar.gz
</code></pre>
</li>
<li><p><strong>Extract the Downloaded Archive:</strong></p>
<pre><code class="lang-bash">  tar -xvzf apache-tomcat-11.0.9.tar.gz
</code></pre>
</li>
<li><p><strong>Start the Tomcat Server:</strong></p>
<pre><code class="lang-bash">  ls
  <span class="hljs-built_in">cd</span> apache-tomcat-11.0.9
  ls
  <span class="hljs-built_in">cd</span> bin
  ./startup.sh
</code></pre>
</li>
</ol>
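<p>Once <code>startup.sh</code> reports the server has started, Tomcat can be verified locally on its default port 8080:</p>
<pre><code class="lang-bash">curl -I http://localhost:8080   # expect an HTTP response header from Tomcat
</code></pre>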
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752734185866/7fa6ce94-83da-4f32-a8f1-e5d8579dc7d6.png?auto=compress,format&amp;format=webp" alt /></p>
<h2 id="heading-mysql-installation-on-db-ec2-instance-private-subnet">🛢️ MySQL Installation on DB EC2 Instance (Private Subnet)</h2>
<p>On the <strong>Database Server EC2 instance</strong> (placed in the <strong>private subnet</strong>), I installed <strong>MySQL Server</strong> to manage the backend database of the application securely.</p>
<h3 id="heading-steps-performed-2">🔧 Steps Performed:</h3>
<ol>
<li><p><strong>Update System Packages:</strong></p>
<pre><code class="lang-bash">  sudo apt update -y
</code></pre>
</li>
<li><p><strong>Install MySQL Server:</strong></p>
<pre><code class="lang-bash">  sudo apt install mysql-server -y
</code></pre>
</li>
<li><p><strong>Start and Enable MySQL Service:</strong></p>
<pre><code class="lang-bash">  sudo systemctl start mysql
  sudo systemctl <span class="hljs-built_in">enable</span> mysql
</code></pre>
</li>
</ol>
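<p>Right after installation, it is good practice to harden the server with the bundled script, which sets the root password referenced later and removes insecure defaults:</p>
<pre><code class="lang-bash">sudo mysql_secure_installation
</code></pre>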
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752736736398/c7545c1f-fd7a-4068-8c6e-87c2d777fd48.png?auto=compress,format&amp;format=webp" alt /></p>
<h2 id="heading-mysql-configuration-on-db-ec2-private-subnet">⚙️ MySQL Configuration on DB EC2 (Private Subnet)</h2>
<p>After installing MySQL on the <strong>DB Server (private subnet)</strong>, I performed additional configuration to allow <strong>internal app server access</strong> by setting the <strong>bind-address</strong> to the server’s <strong>private IP</strong>.</p>
<h3 id="heading-step-1-login-to-mysql-as-root-user">🔐 Step 1: Login to MySQL as Root User</h3>
<p>To securely access MySQL, I logged in using the root user:</p>
<pre><code class="lang-bash">sudo mysql -u root -p
</code></pre>
<p><em>(You’ll be prompted to enter the root password set during secure installation.)</em></p>
<hr />
<h3 id="heading-step-2-edit-mysql-configuration-file">🛠️ Step 2: Edit MySQL Configuration File</h3>
<p>I modified the <strong>MySQL bind-address</strong> to allow access from the app server (within the VPC):</p>
<pre><code class="lang-bash">sudo vim /etc/mysql/mysql.conf.d/mysqld.cnf
</code></pre>
<p>Inside this file, I located the following line:</p>
<pre><code class="lang-bash">bind-address = 127.0.0.1
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752734455242/e0c1296c-376a-4a5f-9ee4-1459500bcc30.png?auto=compress,format&amp;format=webp" alt /></p>
<p>And changed it to my <strong>DB EC2’s private IP</strong>, for example:</p>
<pre><code class="lang-bash">bind-address = 10.1.3.105
</code></pre>
<blockquote>
<p><strong><em>✅ This step ensures MySQL accepts connections only from internal sources (e.g., the app server), not from the public internet — keeping the database secure.</em></strong></p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752734479134/cc97d642-9061-49f7-9e1a-2a7e1ba9ae7e.png?auto=compress,format&amp;format=webp" alt /></p>
<hr />
<h3 id="heading-step-3-restart-mysql-to-apply-changes">🔄 Step 3: Restart MySQL to Apply Changes</h3>
<pre><code class="lang-bash">sudo systemctl restart mysql
</code></pre>
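<p>With MySQL now listening on the private IP, the app server still needs a database account it is allowed to connect with. A hedged sketch — the database name, user, and password below are placeholders, and <code>10.1.2.%</code> restricts logins to the App subnet:</p>
<pre><code class="lang-bash">sudo mysql &lt;&lt;'EOF'
CREATE DATABASE appdb;
CREATE USER 'appuser'@'10.1.2.%' IDENTIFIED BY 'ChangeMe123!';
GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'10.1.2.%';
FLUSH PRIVILEGES;
EOF
</code></pre>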
<h2 id="heading-network-connectivity-testing-ping-amp-telnet">🔗 Network Connectivity Testing (Ping &amp; Telnet)</h2>
<p>To ensure all the instances in my 3-tier architecture (Web, App, and DB) are properly connected and communicating within the VPC, I performed two essential network checks:</p>
<hr />
<h3 id="heading-1-ping-test-initial-connectivity-verification">✅ 1. <strong>Ping Test (Initial Connectivity Verification)</strong></h3>
<p>Before running any application-level commands, I verified the basic connectivity between all EC2 instances using <code>ping</code>.</p>
<ul>
<li><p>From <strong>Web (10.1.1.44)</strong>:</p>
<ul>
<li><p>Ping to App server (<code>10.1.2.97</code>)</p>
</li>
<li><p>Ping to DB server (<code>10.1.3.105</code>)</p>
</li>
</ul>
</li>
<li><p>From <strong>App (10.1.2.97)</strong>:</p>
<ul>
<li><p>Ping to Web server (<code>10.1.1.44</code>)</p>
</li>
<li><p>Ping to DB server (<code>10.1.3.105</code>)</p>
</li>
</ul>
</li>
<li><p>From <strong>DB (10.1.3.105)</strong>:</p>
<ul>
<li><p>Ping to Web server (<code>10.1.1.44</code>)</p>
</li>
<li><p>Ping to App server (<code>10.1.2.97</code>)</p>
</li>
</ul>
</li>
</ul>
<blockquote>
<p><strong><em>📝 All ping tests were successful, confirming that the subnet routing and security group rules were correctly configured for basic communication.</em></strong></p>
</blockquote>
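<p>The pairwise tests above can be scripted as one loop, run from any of the three instances (skip the instance's own IP):</p>
<pre><code class="lang-bash">for ip in 10.1.1.44 10.1.2.97 10.1.3.105; do
    ping -c 3 -W 2 "$ip" &gt; /dev/null &amp;&amp; echo "$ip reachable" || echo "$ip UNREACHABLE"
done
</code></pre>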
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752735954614/dbcd1f78-5b0a-48b6-848b-e1192acc8a98.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-verifying-internal-connectivity-using-telnet-in-3-tier-architecture">✅ Verifying Internal Connectivity using Telnet in 3-Tier Architecture</h3>
<p>To ensure all components in the 3-Tier Architecture (Web, App, and DB) can communicate with each other, we use the <code>telnet</code> command to test port-level connectivity.</p>
<p>Below is how each instance should verify connection with others using <strong>Telnet</strong>:</p>
<hr />
<h4 id="heading-from-web-server-public-subnet-ip-101144">🔹 From Web Server (Public Subnet - IP: <code>10.1.1.44</code>)</h4>
<ul>
<li><p>Check connectivity to <strong>App Server (Tomcat)</strong>:</p>
<pre><code class="lang-bash">    telnet 10.1.2.97 8080
</code></pre>
</li>
<li><p>Check connectivity to <strong>Database Server (MySQL)</strong>:</p>
<pre><code class="lang-bash">    telnet 10.1.3.105 3306
</code></pre>
</li>
</ul>
<hr />
<h4 id="heading-from-app-server-private-subnet-ip-101297">🔹 From App Server (Private Subnet - IP: <code>10.1.2.97</code>)</h4>
<ul>
<li><p>Check connectivity to <strong>Web Server</strong>:</p>
<pre><code class="lang-bash">    telnet 10.1.1.44 22
</code></pre>
</li>
<li><p>Check connectivity to <strong>Database Server</strong>:</p>
<pre><code class="lang-bash">    telnet 10.1.3.105 3306
</code></pre>
</li>
</ul>
<hr />
<h4 id="heading-from-db-server-private-subnet-ip-1013105">🔹 From DB Server (Private Subnet - IP: <code>10.1.3.105</code>)</h4>
<ul>
<li><p>Check connectivity to <strong>Web Server</strong>:</p>
<pre><code class="lang-bash">    telnet 10.1.1.44 22
</code></pre>
</li>
<li><p>Check connectivity to <strong>App Server</strong>:</p>
<pre><code class="lang-bash">    telnet 10.1.2.97 8080
</code></pre>
</li>
</ul>
<hr />
<blockquote>
<p><strong><em>✅ If Telnet successfully connects (i.e., blank screen or "Connected"), it confirms that the port is open and reachable from that instance.</em></strong></p>
<p><strong><em>❌ If Telnet fails (i.e., "Connection refused" or "Unable to connect"), check Security Groups, NACLs, and Routing Tables.</em></strong></p>
</blockquote>
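<p>On minimal images where <code>telnet</code> is not installed, <code>nc</code> (netcat) performs the same port-level check without opening an interactive session:</p>
<pre><code class="lang-bash">nc -zv 10.1.2.97 8080    # -z: scan only, -v: report success/failure
nc -zv 10.1.3.105 3306
</code></pre>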
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752735815959/ea072fff-0ca7-4e23-b415-0ebf26884c24.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752735820273/75ec1098-e843-40e1-9699-5dec5bbd1138.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752735830041/9b9d7673-7a90-4e65-8c0e-9763a4954a54.png?auto=compress,format&amp;format=webp" alt /></p>
<h2 id="heading-conclusion">📝 Conclusion</h2>
<p>This project demonstrates a successful deployment of a secure and scalable 3-tier architecture on AWS using Nginx (Web), Tomcat (App), and MySQL (DB).<br />It follows best practices for network isolation, access control, and modular application deployment in a cloud environment.</p>
]]></content:encoded></item><item><title><![CDATA[100 Days of DevOps – Day 2: Creating a Temporary User with Expiry Date]]></title><description><![CDATA[Task for the Day
Today’s challenge was about User Management in Linux. I had to create a temporary user named anita on App Server 1 in the Stratos Datacenter. The special condition was that this user should expire automatically on 2023-12-07.
This ensu...]]></description><link>https://gurjar-vishal.me/100-days-of-devops-day-2</link><guid isPermaLink="true">https://gurjar-vishal.me/100-days-of-devops-day-2</guid><category><![CDATA[day2]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[GURJAR VISHAL]]></dc:creator><pubDate>Sat, 06 Sep 2025 03:04:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757128954074/a3e0f4d3-46c1-40e9-af26-780bc80c87c1.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-ask-for-the-day">Task for the Day</h3>
<p>Today’s challenge was about <strong>User Management in Linux</strong>.<br />I had to create a <strong>temporary user</strong> named <code>anita</code> on <strong>App Server 1</strong> in the Stratos Datacenter. The special condition was that this user should <strong>expire automatically on 2023-12-07</strong>.</p>
<p>This ensures security and smooth access management, especially when developers or external people need access for a limited time.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757127117024/c15bc7b5-fb47-4139-9b69-238aed76fe54.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-steps-i-followed">Steps I Followed</h3>
<ol>
<li><h4 id="heading-logged-in-to-app-server-1">Logged in to App Server 1</h4>
</li>
</ol>
<p>First, I connected to the jump host and from there SSHed into <strong>App Server 1</strong> as user <code>tony</code> with password <code>Ir0nM@n</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757127224504/be75254f-c20b-4dd3-988f-a25227c24dbd.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-bash"><span class="hljs-comment">#ssh user@appserver1</span>
ssh tony@stapp01.stratos.xfusioncorp.com
</code></pre>
<hr />
<ol start="2">
<li><h4 id="heading-created-user-with-expiry-date">Created User with Expiry Date</h4>
</li>
</ol>
<p>To create a user with expiry, Linux provides the <code>-e</code> flag in the <code>useradd</code> command.</p>
<pre><code class="lang-bash">sudo useradd -e 2023-12-07 anita
</code></pre>
<p>Here:</p>
<ul>
<li><p><code>-e 2023-12-07</code> → Sets the expiry date.</p>
</li>
<li><p><code>anita</code> → Username in <strong>lowercase</strong>, per standard naming convention.</p>
</li>
</ul>
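<p>If the access window later changes, the expiry can be adjusted, or removed entirely, on the existing account with <code>usermod</code> (the new date below is illustrative):</p>
<pre><code class="lang-bash">sudo usermod -e 2024-01-15 anita   # move the expiry date
sudo usermod -e "" anita           # clear the expiry so the account no longer expires
</code></pre>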
<hr />
<ol start="3">
<li><h4 id="heading-verified-the-user-expiry">Verified the User Expiry</h4>
</li>
</ol>
<p>I used the <code>chage</code> command to confirm if the expiry was set correctly:</p>
<pre><code class="lang-bash">sudo chage -l anita
</code></pre>
<p>Output showed:</p>
<pre><code class="lang-bash">Account expires : Dec 07, 2023
</code></pre>
<p>✅ Perfect!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757127345317/2f815b2d-1bc2-44f3-8c3a-8e5634a86461.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-learnings-from-day-2">Learnings from Day 2</h3>
<ul>
<li><p>Learned how to create Linux users with expiry dates.</p>
</li>
<li><p>Understood the importance of <strong>temporary access</strong> in real-world DevOps &amp; system administration.</p>
</li>
<li><p>Practiced using <code>useradd</code> and <code>chage</code> commands.</p>
</li>
</ul>
<hr />
<h3 id="heading-whats-next">🔮 What’s Next?</h3>
<p>Tomorrow is Day 3: <mark>Secure Root SSH Access</mark>.</p>
<p>Stay tuned for the <strong>100 Days of DevOps Challenge</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757127401972/24266c94-7282-4dd1-9318-0353c0f0145c.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[100 Days of DevOps — Day 1: Linux User Setup with Non-Interactive Shell]]></title><description><![CDATA[This kind of setup is often used when a user account is needed for system processes, automation, or service integration but should not have direct login access. It’s an important concept in DevOps and system security.

Task Description
At xFusionCorp...]]></description><link>https://gurjar-vishal.me/100-days-of-devops-day-1-linux-user-setup-with-non-interactive-shell</link><guid isPermaLink="true">https://gurjar-vishal.me/100-days-of-devops-day-1-linux-user-setup-with-non-interactive-shell</guid><category><![CDATA[day1]]></category><category><![CDATA[Devops]]></category><category><![CDATA[KodeKloud]]></category><dc:creator><![CDATA[GURJAR VISHAL]]></dc:creator><pubDate>Fri, 05 Sep 2025 16:27:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757129319226/34d1a6fb-5c6d-48ce-a1fd-49cbb2e2552c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This kind of setup is often used when a user account is needed for system processes, automation, or service integration but should not have direct login access. It’s an important concept in DevOps and system security.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757088078291/5408ceb5-c6da-4b51-930c-cf96d16546f4.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-task-description">Task Description</h2>
<p>At <strong>xFusionCorp Industries</strong>, the system admin team required me to:</p>
<ul>
<li><p>Create a user named <code>ammar</code></p>
</li>
<li><p>Assign the user a <strong>non-interactive shell</strong></p>
</li>
<li><p>Perform this on <strong>App Server 3</strong></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757088168406/c34ceac4-bd1e-4349-83e4-427dacb740e0.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-steps-amp-commands">Steps &amp; Commands</h2>
<ol>
<li><p><strong>Connect to App Server 3</strong> (password: <code>BigGr33n</code>)</p>
<pre><code class="lang-bash"> ssh banner@stapp03.stratos.xfusioncorp.com
</code></pre>
</li>
<li><p><strong>Create the user with a non-interactive shell</strong></p>
<pre><code class="lang-bash"> sudo useradd -s /sbin/nologin ammar
</code></pre>
<p> Here:</p>
<ul>
<li><p><code>useradd</code> → command to create a new user</p>
</li>
<li><p><code>-s /sbin/nologin</code> → prevents the user from logging in interactively</p>
</li>
</ul>
</li>
<li><p><strong>Verify the user shell</strong></p>
<pre><code class="lang-bash"> grep ammar /etc/passwd
</code></pre>
<p> Output should look like this:</p>
<pre><code class="lang-bash"> ammar:x:1001:1001::/home/ammar:/sbin/nologin
</code></pre>
</li>
</ol>
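<p>To confirm the shell really blocks interactive logins, try switching to the account; <code>/sbin/nologin</code> typically refuses with a short "account is not available" message:</p>
<pre><code class="lang-bash">sudo su - ammar
</code></pre>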
<h2 id="heading-challenges-faced">Challenges Faced</h2>
<ul>
<li><p>Remembering the difference between <code>/sbin/nologin</code> and <code>/bin/false</code>.</p>
</li>
<li><p>Making sure I was connected to the correct server (<strong>App Server 3</strong>) before running the command.</p>
</li>
</ul>
<h2 id="heading-lessons-learned">Lessons Learned</h2>
<ul>
<li><p><strong>User management</strong> is one of the most important Linux admin skills for DevOps.</p>
</li>
<li><p>Using a <strong>non-interactive shell</strong> ensures security for accounts that exist only for processes, not real users.</p>
</li>
<li><p>Always double-check the <strong>server name</strong> before making changes in multi-server environments.</p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Day 1 taught me how even simple tasks like user creation play a huge role in <strong>system security</strong> and <strong>infrastructure management</strong>. This was a strong starting point for my DevOps journey.</p>
<p>👉 Next up: <strong>Day 2 – Temporary User Setup with Expiry</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757088456063/6a94c6d3-4aa9-47b5-8a29-06a94d43d282.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Automated S3 Backup & Restore with Shell Script]]></title><description><![CDATA[In this project, I built a secure and automated backup & restore system using AWS S3 and shell scripting. The aim was to back up a local directory and a database to Amazon S3, ensure incremental backups, enable automation, maintain logs, and even sup...]]></description><link>https://gurjar-vishal.me/automated-s3-backup-and-restore-with-shell-script</link><guid isPermaLink="true">https://gurjar-vishal.me/automated-s3-backup-and-restore-with-shell-script</guid><category><![CDATA[Automated S3 Backup & Restore]]></category><dc:creator><![CDATA[GURJAR VISHAL]]></dc:creator><pubDate>Sat, 09 Aug 2025 13:38:35 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757092417248/341c1163-944e-487f-b8bb-a6de62994ad5.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this project, I built a secure and automated backup &amp; restore system using AWS S3 and shell scripting. The aim was to back up a local directory and a database to Amazon S3, ensure incremental backups, enable automation, maintain logs, and even support encryption as a bonus.</p>
<ol>
<li><h2 id="heading-prerequisites"><strong>Prerequisites</strong></h2>
</li>
</ol>
<ul>
<li><p>AWS Account with S3 bucket</p>
</li>
<li><p>AWS CLI installed &amp; configured</p>
</li>
<li><p>Ubuntu/Linux environment</p>
</li>
<li><p>Basic shell scripting knowledge</p>
</li>
<li><p><code>cron</code> (for scheduled backups)</p>
</li>
</ul>
<ol start="2">
<li><h2 id="heading-project-requirements-as-per-assignment-pdf"><strong>Project Requirements (as per assignment PDF)</strong></h2>
</li>
</ol>
<ul>
<li><p>Backup a local directory to S3</p>
</li>
<li><p>Incremental backups</p>
</li>
<li><p>Automate backups (scheduled or real-time)</p>
</li>
<li><p>Logging</p>
</li>
<li><p>Restore process</p>
</li>
<li><p>Database backup</p>
</li>
<li><p>Optional encryption before upload</p>
</li>
</ul>
<ol start="3">
<li><h2 id="heading-launch-an-instance"><strong>Launch an Instance:</strong></h2>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754737391944/99edaafc-2913-4617-94cf-b423c708e553.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-setting-up-aws-cli"><strong>Setting up AWS CLI</strong></h3>
<pre><code class="lang-bash">sudo apt update
sudo apt install unzip -y
curl <span class="hljs-string">"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"</span> -o <span class="hljs-string">"awscliv2.zip"</span>
unzip awscliv2.zip
sudo ./aws/install
aws configure
</code></pre>
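<p>A quick sanity check confirms the CLI is installed and the configured credentials actually work:</p>
<pre><code class="lang-bash">aws --version
aws sts get-caller-identity   # shows the account and IAM identity in use
</code></pre>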
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754744628499/d08cca30-61eb-4e7f-9eef-9bc7ff61bb2c.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754744561577/973fbe96-7daa-43ef-b8e2-642aef84bc37.png" alt class="image--center mx-auto" /></p>
<ol start="4">
<li><h2 id="heading-creating-s3-bucket">Creating S3 Bucket</h2>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754737538247/7f771b9a-0633-484b-8ebf-2299458a444c.png" alt class="image--center mx-auto" /></p>
<ol start="5">
<li><h3 id="heading-setting-up-the-environment"><strong>Setting up the Environment</strong></h3>
</li>
</ol>
<p>I started by creating a folder called <code>mybackup</code> where all the files to be backed up would be stored:</p>
<pre><code class="lang-bash">mkdir mybackup
</code></pre>
<p>For testing, I created a couple of files:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"task1"</span> &gt; vishal.txt
<span class="hljs-built_in">echo</span> <span class="hljs-string">"task2"</span> &gt; vishal1.txt
touch vishal2
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754739537864/1b6ab2cc-736d-445a-ac8b-ad9a2083ad57.png" alt class="image--center mx-auto" /></p>
<ol start="6">
<li><h3 id="heading-creating-the-backup-script"><strong>Creating the Backup Script</strong></h3>
</li>
</ol>
<p>I wrote a shell script named <code>backup-to-s3.sh</code> to upload the contents of <code>mybackup</code> to my S3 bucket.</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

FOLDER_PATH=<span class="hljs-string">"/home/ubuntu/mybackup"</span>
BUCKET_NAME=<span class="hljs-string">"my-backup-bucket-vishal"</span>
LOG_FILE=<span class="hljs-string">"/home/ubuntu/backup-log.txt"</span>
DATE=$(date +<span class="hljs-string">"%Y-%m-%d %H:%M:%S"</span>)

<span class="hljs-built_in">echo</span> <span class="hljs-string">"[<span class="hljs-variable">$DATE</span>] Backup started..."</span> | tee -a <span class="hljs-string">"<span class="hljs-variable">$LOG_FILE</span>"</span>

aws s3 sync <span class="hljs-string">"<span class="hljs-variable">$FOLDER_PATH</span>"</span> <span class="hljs-string">"s3://<span class="hljs-variable">$BUCKET_NAME</span>/"</span> \
    --exact-timestamps \
    --storage-class STANDARD_IA \
    &gt;&gt; <span class="hljs-string">"<span class="hljs-variable">$LOG_FILE</span>"</span> 2&gt;&amp;1
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754738941583/6ab46a8a-6965-4c53-94db-4457ce9e7d53.png" alt class="image--center mx-auto" /></p>
<p>I made it executable:</p>
<pre><code class="lang-bash">chmod +x /home/ubuntu/backup-to-s3.sh
</code></pre>
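<p>The timestamp-and-<code>tee</code> pattern from the script is worth factoring into a small helper if you log from several places. A minimal sketch (the default log path here is an assumption for illustration):</p>
<pre><code class="lang-bash">#!/bin/bash

# Minimal logging helper: timestamps each message and appends it to a
# log file while also printing it to the terminal.
LOG_FILE="${LOG_FILE:-/tmp/backup-log.txt}"   # assumed default path

log() {
    echo "[$(date +"%Y-%m-%d %H:%M:%S")] $*" | tee -a "$LOG_FILE"
}

log "Backup started..."
log "Backup finished."
</code></pre>
<p>Both the backup and restore scripts could then call <code>log</code> instead of repeating the <code>date</code> and <code>tee</code> boilerplate.</p>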
<ol start="7">
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754739114037/32a8afbd-6719-4431-8cc4-96dfa9ca5488.png" alt class="image--center mx-auto" /></p>
</li>
<li><h3 id="heading-automating-the-backup"><strong>Automating the Backup</strong></h3>
</li>
</ol>
<p>Manual execution wasn’t enough. I needed automation.<br />I used <code>cron</code> to schedule the backup every minute (for testing):</p>
<pre><code class="lang-bash">crontab -e
</code></pre>
<p>Added the line: (for testing)</p>
<pre><code class="lang-bash">*/1 * * * /home/ubuntu/backup-to-s3.sh
</code></pre>
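<p>For reference, a crontab entry has five time fields (minute, hour, day of month, month, day of week) followed by the command to run:</p>
<pre><code class="lang-bash"># ┌───────── minute (*/1 = every minute)
# │ ┌─────── hour
# │ │ ┌───── day of month
# │ │ │ ┌─── month
# │ │ │ │ ┌─ day of week
# │ │ │ │ │
*/1 * * * * /home/ubuntu/backup-to-s3.sh
</code></pre>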
<p>Now, the script runs automatically every minute and backs up any new or modified files.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754739300960/cca88953-b77c-473c-95f6-095d8aa34cd9.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754739338904/0308909c-f27f-4b40-9f25-841338b0cc6e.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754739371389/43d7204d-fc3a-47db-90e2-b5e970ff895f.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754739364326/1ddb787f-289b-4e27-978e-29291e8e3226.png" alt class="image--center mx-auto" /></p>
<ol start="9">
<li><h3 id="heading-restore-script"><strong>Restore Script</strong></h3>
</li>
</ol>
<p>I also wrote a restore script to pull files from S3 back to my local folder:</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

RESTORE_PATH=<span class="hljs-string">"/home/ubuntu/mybackup"</span>
BUCKET_NAME=<span class="hljs-string">"my-backup-bucket-vishal"</span>
LOG_FILE=<span class="hljs-string">"/home/ubuntu/restore-log.txt"</span>
DATE=$(date +<span class="hljs-string">"%Y-%m-%d %H:%M:%S"</span>)

<span class="hljs-built_in">echo</span> <span class="hljs-string">"[<span class="hljs-variable">$DATE</span>] Restore started..."</span> | tee -a <span class="hljs-string">"<span class="hljs-variable">$LOG_FILE</span>"</span>

aws s3 sync <span class="hljs-string">"s3://<span class="hljs-variable">$BUCKET_NAME</span>/"</span> <span class="hljs-string">"<span class="hljs-variable">$RESTORE_PATH</span>"</span> \
    --exact-timestamps \
    &gt;&gt; <span class="hljs-string">"<span class="hljs-variable">$LOG_FILE</span>"</span> 2&gt;&amp;1
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754740737187/71766173-91e8-4336-88ed-dbc8776aa016.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754743867126/f81bbc91-27ec-451a-9d7b-6cf639f41bab.png" alt class="image--center mx-auto" /></p>
<p>The files <code>vg.py</code>, <code>vishal.txt</code>, and <code>vishal2</code> in the <code>mybackup</code> directory are deleted, and then they are restored from the <code>my-backup-bucket-vishal</code> S3 bucket back into the <code>mybackup</code> directory.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754743870765/cf6d35f8-12d4-4a75-be66-537a7c3e3a7f.png" alt class="image--center mx-auto" /></p>
<ol start="10">
<li><h3 id="heading-results"><strong>Results</strong></h3>
</li>
</ol>
<ul>
<li><p><strong>Incremental backups</strong>: Only changed or new files are uploaded.</p>
</li>
<li><p><strong>Automation</strong>: Cron ensures backups run on schedule.</p>
</li>
<li><p><strong>Logging</strong>: Both backup and restore processes record their activities.</p>
</li>
<li><p><strong>Restore process</strong>: I can get all files back from S3 whenever needed.</p>
</li>
</ul>
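<p>The “incremental” behaviour comes from <code>aws s3 sync</code> only transferring files that are new or changed, based on size and timestamp comparison. The same decision logic can be sketched locally with plain shell, using temp directories as stand-ins for the <code>mybackup</code> folder and the S3 bucket:</p>
<pre><code class="lang-bash">#!/bin/bash

# Sketch of incremental-copy logic: copy a file only when the destination
# copy is missing or its content differs. Temp dirs stand in for
# /home/ubuntu/mybackup and the S3 bucket.
SRC=$(mktemp -d)
DST=$(mktemp -d)

echo "task1" | tee "$SRC/vishal.txt"   # create a test file (tee avoids shell redirection)
cp "$SRC/vishal.txt" "$DST/"           # initial full backup

echo "changed" | tee "$SRC/vishal.txt" # local edit since the last backup

for f in "$SRC"/*; do
    name=$(basename "$f")
    # cmp -s exits non-zero when the files differ or the target is missing
    if ! cmp -s "$f" "$DST/$name"; then
        cp "$f" "$DST/$name"
        echo "uploaded $name"
    fi
done
</code></pre>
<p>On the second pass only the edited file is copied; unchanged files are skipped, which is exactly why repeated cron runs stay cheap.</p>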
]]></content:encoded></item><item><title><![CDATA[Building a Scalable Netflix Clone: Integrating CI/CD, DevSecOps, and Kubernetes]]></title><description><![CDATA[Hello friends, we will be deploying a Netflix clone. We will be using Jenkins as a CICD tool and deploying our application on a Docker container and Kubernetes Cluster and we will monitor the Jenkins and Kubernetes metrics using Grafana, Prometheus a...]]></description><link>https://gurjar-vishal.me/netflix-clone</link><guid isPermaLink="true">https://gurjar-vishal.me/netflix-clone</guid><category><![CDATA[Netflix Clone]]></category><category><![CDATA[Devops]]></category><category><![CDATA[DevSecOps]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Jenkins]]></category><category><![CDATA[sonarqube]]></category><dc:creator><![CDATA[GURJAR VISHAL]]></dc:creator><pubDate>Sun, 06 Jul 2025 04:30:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751652455512/f2adce12-7d0c-45b8-ac23-9a9683f621ee.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello friends, we will be deploying a Netflix clone. We will be using Jenkins as a CICD tool and deploying our application on a Docker container and Kubernetes Cluster and we will monitor the Jenkins and Kubernetes metrics using Grafana, Prometheus and Node exporter. I Hope this detailed blog is useful.</p>
<p><a target="_blank" href="https://github.com/gurjar-vishal/Netflix-clone.git"><strong>CLICK HERE FOR GITHUB REPOSITORY</strong></a></p>
<p><strong>Steps:-</strong></p>
<p>Step 1 — Launch an Ubuntu(22.04) T2 Large Instance</p>
<p>Step 2 — Install Jenkins, Docker and Trivy. Create a Sonarqube Container using Docker.</p>
<p>Step 3 — Create a TMDB API Key.</p>
<p>Step 4 — Install Prometheus and Grafana On the new Server.</p>
<p>Step 5 — Install the Prometheus Plugin and Integrate it with the Prometheus server.</p>
<p>Step 6 — Email Integration With Jenkins and Plugin setup.</p>
<p>Step 7 — Install Plugins like JDK, Sonarqube Scanner, Nodejs, and OWASP Dependency Check.</p>
<p>Step 8 — Create a Pipeline Project in Jenkins using a Declarative Pipeline</p>
<p>Step 9 — Install OWASP Dependency Check Plugins</p>
<p>Step 10 — Docker Image Build and Push</p>
<p>Step 11 — Deploy the image using Docker</p>
<p>Step 12 — Kubernetes master and slave setup on Ubuntu (20.04)</p>
<p>Step 13 — Access the Netflix app on the Browser.</p>
<p>Step 14 — Terminate the AWS EC2 Instances.</p>
<p><strong>Now, let’s get started and dig deeper into each of these steps:-</strong></p>
<h3 id="heading-step-1-launch-an-ubuntu2204-t2-large-instance"><strong>STEP 1 : Launch an Ubuntu(22.04) T2 Large Instance</strong></h3>
<p>Launch an AWS T2 Large Instance with the Ubuntu image. You can create a new key pair or use an existing one. Enable HTTP and HTTPS in the Security Group and open all ports (opening all ports is not a best practice, but it’s fine for learning purposes).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751772893701/cefd61b5-0b1f-4e0a-a7b8-6a3973df1b0e.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-2-install-jenkins-docker-and-trivy"><strong>Step 2 — Install Jenkins, Docker and Trivy</strong></h3>
<h3 id="heading-2a-to-install-jenkins"><strong>2A — To Install Jenkins</strong></h3>
<p>Connect to your console, and enter these commands to Install Jenkins</p>
<pre><code class="lang-bash">vi jenkins.sh <span class="hljs-comment"># run as root, or add this as EC2 user data at launch</span>
</code></pre>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>
sudo apt update -y
<span class="hljs-comment">#sudo apt upgrade -y</span>
sudo mkdir -p /etc/apt/keyrings
wget -O - https://packages.adoptium.net/artifactory/api/gpg/key/public | sudo tee /etc/apt/keyrings/adoptium.asc
<span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [signed-by=/etc/apt/keyrings/adoptium.asc] https://packages.adoptium.net/artifactory/deb <span class="hljs-subst">$(awk -F= '/^VERSION_CODENAME/{print$2}' /etc/os-release)</span> main"</span> | sudo tee /etc/apt/sources.list.d/adoptium.list
sudo apt update -y
sudo apt install temurin-17-jdk -y
/usr/bin/java --version
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee \
                  /usr/share/keyrings/jenkins-keyring.asc &gt; /dev/null
<span class="hljs-built_in">echo</span> deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
                  https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
                              /etc/apt/sources.list.d/jenkins.list &gt; /dev/null
sudo apt-get update -y
sudo apt-get install jenkins -y
sudo systemctl start jenkins
sudo systemctl status jenkins
</code></pre>
<pre><code class="lang-bash">sudo chmod +x jenkins.sh
./jenkins.sh    <span class="hljs-comment"># this will install jenkins</span>
</code></pre>
<p>Once Jenkins is installed, you will need to go to your AWS EC2 Security Group and open Inbound Port 8080, since Jenkins works on Port 8080.</p>
<p>Now, grab your Public IP Address</p>
<pre><code class="lang-bash">&lt;EC2 Public IP Address:8080&gt;
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751772959328/33a02437-03b9-4f4c-b589-d0f7709bfc48.png" alt /></p>
<p>Unlock Jenkins using an administrative password and install the suggested plugins.</p>
<p>Jenkins will now get installed and install all the libraries.</p>
<p>Create a user, then click on Save and Continue.</p>
<p>Jenkins Getting Started Screen.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751773239995/7e293fd6-f7eb-491c-ba54-fcd8488ef49f.png" alt /></p>
<h3 id="heading-2b-install-docker"><strong>2B — Install Docker</strong></h3>
<pre><code class="lang-bash">sudo apt-get update
sudo apt-get install docker.io -y
sudo usermod -aG docker <span class="hljs-variable">$USER</span>   <span class="hljs-comment">#my case is ubuntu</span>
newgrp docker
sudo chmod 777 /var/run/docker.sock
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751773394618/89aacf89-9953-4cde-9f97-908099dd3c63.png" alt class="image--center mx-auto" /></p>
<p>After the Docker installation, we create a SonarQube container (remember to open port 9000 in the security group).</p>
<pre><code class="lang-bash">docker run -d --name sonar -p 9000:9000 sonarqube:lts-community
</code></pre>
<p>Now our SonarQube container is up and running.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751773019667/dee428fc-83a5-423a-b35c-d9c69bc8c087.png" alt /></p>
<p>Enter username and password, click on login and change password</p>
<pre><code class="lang-bash">username admin
password admin
</code></pre>
<p>Update to a new password. This is the Sonar Dashboard.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751773287806/57fac365-4d4e-46ad-afa9-3620675d7d30.png" alt /></p>
<h3 id="heading-2c-install-trivy"><strong>2C — Install Trivy</strong></h3>
<pre><code class="lang-bash">vi trivy.sh
</code></pre>
<pre><code class="lang-bash">sudo apt-get install wget apt-transport-https gnupg lsb-release -y
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg &gt; /dev/null
<span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb <span class="hljs-subst">$(lsb_release -sc)</span> main"</span> | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy -y
</code></pre>
<h3 id="heading-step-3-create-a-tmdb-api-key"><strong>Step 3: Create a TMDB API Key</strong></h3>
<p>Next, we will create a TMDB API key</p>
<p>Open a new tab in the Browser and search for TMDB</p>
<p>Click on the first result, you will see this page</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751773479523/ae47f355-4389-4c51-be9c-31cf8a07b96c.png" alt /></p>
<p>Click on the Login on the top right. You will get this page.</p>
<p>You need to create an account here by clicking “Click here”. I already have an account, so I entered my details.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751773513671/f4a0abc8-6889-47b2-9dcd-5207922eb987.png" alt /></p>
<p>once you create an account you will see this page.</p>
<p>Let’s create an API key, By clicking on your profile and clicking settings.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751773562729/06180d3e-7768-4457-bb73-749d05a935db.png" alt /></p>
<p>Now click on API from the left side panel.</p>
<p>Now click on create</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696676552066/d768cbba-5c5c-44c9-8692-14276afa7516.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Click on Developer</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696676567781/7f92bfb0-f76c-47c0-9f7b-716cfa8e617d.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Now you have to accept the terms and conditions.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696676601846/c2b4a1c7-e72a-405c-82a5-cfd5fc454403.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Provide basic details</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696676624029/ac7b685f-3fae-449c-977f-717e406a4933.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696676646550/64355217-f9c5-4267-aa09-b3ca03b922a3.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Click on submit and you will get your API key.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696676675438/de5b5e7b-370e-4d73-874f-842451e2d508.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-step-4-install-prometheus-and-grafana-on-the-new-server"><strong>Step 4 — Install Prometheus and Grafana On the new Server</strong></h3>
<p>First of all, let’s create a dedicated Linux user, sometimes called a system account, for Prometheus. Having individual users for each service serves two main purposes:</p>
<p>It is a security measure to reduce the impact in case of an incident with the service.</p>
<p>It simplifies administration as it becomes easier to track down what resources belong to which service.</p>
<p>To create a system user or system account, run the following command:</p>
<pre><code class="lang-bash">sudo useradd \
    --system \
    --no-create-home \
    --shell /bin/<span class="hljs-literal">false</span> prometheus
</code></pre>
<p><code>--system</code> – Will create a system account.<br /><code>--no-create-home</code> – We don’t need a home directory for Prometheus or any other system accounts in our case.<br /><code>--shell /bin/false</code> – It prevents logging in as the Prometheus user.<br /><code>prometheus</code> – The username argument; it creates a prometheus user and a group with the same name.</p>
<p>Let’s check the latest version of Prometheus from the <a target="_blank" href="https://prometheus.io/download/"><strong>download page</strong></a>.</p>
<p>You can use the curl or wget command to download Prometheus.</p>
<pre><code class="lang-bash">wget https://github.com/prometheus/prometheus/releases/download/v2.47.1/prometheus-2.47.1.linux-amd64.tar.gz
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751773632463/da25fc15-43d8-40c5-9dd7-bbf29496e84c.png" alt /></p>
<p>Then, we need to extract all Prometheus files from the archive.</p>
<pre><code class="lang-bash">tar -xvf prometheus-2.47.1.linux-amd64.tar.gz
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751773654970/e31556c9-acc7-40b4-b2b2-dd32f0b04a13.png" alt /></p>
<p>Usually, you would have a disk mounted to the data directory. For this tutorial, I will simply create a /data directory. Also, you need a folder for Prometheus configuration files.</p>
<pre><code class="lang-bash">sudo mkdir -p /data /etc/prometheus
</code></pre>
<p>Now, let’s change the directory to Prometheus and move some files.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> prometheus-2.47.1.linux-amd64/
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751773692328/640d4702-445c-4b8e-9495-dcc3d355f2a8.png" alt /></p>
<p>First of all, let’s move the Prometheus binary and a promtool to the /usr/local/bin/. promtool is used to check configuration files and Prometheus rules.</p>
<pre><code class="lang-bash">sudo mv prometheus promtool /usr/<span class="hljs-built_in">local</span>/bin/
</code></pre>
<p>Optionally, we can move console libraries to the Prometheus configuration directory. Console templates allow for the creation of arbitrary consoles using the Go templating language. You don’t need to worry about it if you’re just getting started.</p>
<pre><code class="lang-bash">sudo mv consoles/ console_libraries/ /etc/prometheus/
</code></pre>
<p>Finally, let’s move the example of the main Prometheus configuration file.</p>
<pre><code class="lang-bash">sudo mv prometheus.yml /etc/prometheus/prometheus.yml
</code></pre>
<p>To avoid permission issues, you need to set the correct ownership for the /etc/prometheus/ and data directory.</p>
<pre><code class="lang-bash">sudo chown -R prometheus:prometheus /etc/prometheus/ /data/
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751773756915/84b5ef43-a961-433e-89c5-338988859271.png" alt /></p>
<p>You can delete the archive and the Prometheus folder when you are done.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span>
rm -rf prometheus-2.47.1.linux-amd64.tar.gz prometheus-2.47.1.linux-amd64/
</code></pre>
<p>Verify that you can execute the Prometheus binary by running the following command:</p>
<pre><code class="lang-bash">prometheus --version
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751773832253/85749cc3-957e-4c27-a686-18932504b1da.png" alt class="image--center mx-auto" /></p>
<p>To get more information and configuration options, run Prometheus Help.</p>
<pre><code class="lang-bash">prometheus --<span class="hljs-built_in">help</span>
</code></pre>
<p>We’re going to use some of these options in the service definition.</p>
<p>We’re going to use Systemd, which is a system and service manager for Linux operating systems. For that, we need to create a Systemd unit configuration file.</p>
<pre><code class="lang-bash">sudo vim /etc/systemd/system/prometheus.service
</code></pre>
<p><strong>Prometheus.service</strong></p>
<pre><code class="lang-bash">[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5
[Service]
User=prometheus
Group=prometheus
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/<span class="hljs-built_in">local</span>/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/data \
  --web.console.templates=/etc/prometheus/consoles \
  --web.console.libraries=/etc/prometheus/console_libraries \
  --web.listen-address=0.0.0.0:9090 \
  --web.enable-lifecycle
[Install]
WantedBy=multi-user.target
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696602042927/d7a1a28b-cc48-42e7-beaa-2b54ceb8371b.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>Let’s go over a few of the most important options related to Systemd and Prometheus.<br /><code>Restart</code> – Configures whether the service shall be restarted when the service process exits, is killed, or a timeout is reached.<br /><code>RestartSec</code> – Configures the time to sleep before restarting a service.<br /><code>User</code> and <code>Group</code> – The Linux user and group that the Prometheus process starts as.<br /><code>--config.file=/etc/prometheus/prometheus.yml</code> – Path to the main Prometheus configuration file.<br /><code>--storage.tsdb.path=/data</code> – Location to store Prometheus data.<br /><code>--web.listen-address=0.0.0.0:9090</code> – Configure to listen on all network interfaces. In some situations, you may have a proxy such as nginx to redirect requests to Prometheus. In that case, you would configure Prometheus to listen only on <a target="_blank" href="http://localhost/"><strong>localhost</strong></a>.<br /><code>--web.enable-lifecycle</code> – Allows managing Prometheus, for example reloading the configuration without restarting the service.</p>
<p>To automatically start the Prometheus after reboot, run enable.</p>
<pre><code class="lang-bash">sudo systemctl <span class="hljs-built_in">enable</span> prometheus
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751773932741/f5f11e9e-245b-43f3-8eac-ef1e9b536098.png" alt class="image--center mx-auto" /></p>
<p>Then just start the Prometheus.</p>
<pre><code class="lang-bash">sudo systemctl start prometheus
</code></pre>
<p>To check the status of Prometheus run the following command:</p>
<pre><code class="lang-bash">sudo systemctl status prometheus
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751773948516/6afcbc97-36d2-4a17-b34d-15397a70a5b7.png" alt class="image--center mx-auto" /></p>
<p>If you encounter any issues with Prometheus or are unable to start it, the easiest way to find the problem is to use the <code>journalctl</code> command and search for errors.</p>
<pre><code class="lang-bash">journalctl -u prometheus -f --no-pager
</code></pre>
<p>Now we can try to access it via the browser. I’m going to be using the IP address of the Ubuntu server. You need to append port 9090 to the IP.</p>
<pre><code class="lang-bash">&lt;public-ip:9090&gt;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751773970507/04946ec7-08f0-41d2-8c4e-94b1014d3045.png" alt class="image--center mx-auto" /></p>
<p>If you go to targets, you should see only one – Prometheus target. It scrapes itself every 15 seconds by default.</p>
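<p>That interval comes from the example configuration we copied to <code>/etc/prometheus/prometheus.yml</code>, whose global section sets the default scrape interval:</p>
<pre><code class="lang-bash">global:
  scrape_interval: 15s  # how often targets are scraped by default
</code></pre>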
<h3 id="heading-install-node-exporter-on-ubuntu-2204"><strong>Install Node Exporter on Ubuntu 22.04</strong></h3>
<p>Next, we’re going to set up and configure Node Exporter to collect Linux system metrics like CPU load and disk I/O. Node Exporter will expose these as Prometheus-style metrics. Since the installation process is very similar, I won’t cover it in as much depth as Prometheus.</p>
<p>First, let’s create a system user for Node Exporter by running the following command:</p>
<pre><code class="lang-bash">sudo useradd \
    --system \
    --no-create-home \
    --shell /bin/<span class="hljs-literal">false</span> node_exporter
</code></pre>
<p>You can <a target="_blank" href="https://prometheus.io/download/"><strong>download Node Exporter</strong></a> from the same page.</p>
<p>Use the wget command to download the binary.</p>
<pre><code class="lang-bash">wget https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751773997337/d222f2dd-f7f0-470a-a78b-a974b184143b.png" alt class="image--center mx-auto" /></p>
<p>Extract the node exporter from the archive.</p>
<pre><code class="lang-bash">tar -xvf node_exporter-1.6.1.linux-amd64.tar.gz
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751774007897/1a624376-6a6a-4871-b8d1-2fea0ed82914.png" alt class="image--center mx-auto" /></p>
<p>Move binary to the /usr/local/bin.</p>
<pre><code class="lang-bash">sudo mv \
  node_exporter-1.6.1.linux-amd64/node_exporter \
  /usr/<span class="hljs-built_in">local</span>/bin/
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751774018736/54bc5748-b70f-4e2f-bab1-93c6917a9b28.png" alt class="image--center mx-auto" /></p>
<p>Clean up by deleting the node_exporter archive and folder.</p>
<pre><code class="lang-bash">rm -rf node_exporter*
</code></pre>
<p>Verify that you can run the binary.</p>
<pre><code class="lang-bash">node_exporter --version
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751774048869/6fa4099d-405f-4306-a12d-865d37428dbc.png" alt class="image--center mx-auto" /></p>
<p>Node Exporter has a lot of plugins that we can enable. If you run Node Exporter help you will get all the options.</p>
<pre><code class="lang-bash">node_exporter --<span class="hljs-built_in">help</span>
</code></pre>
<p>We’re going to enable the logind collector (<code>--collector.logind</code>), just for the demo.</p>
<p>Next, create a similar systemd unit file.</p>
<pre><code class="lang-bash">sudo vim /etc/systemd/system/node_exporter.service
</code></pre>
<p><strong>node_exporter.service</strong></p>
<pre><code class="lang-bash">[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5
[Service]
User=node_exporter
Group=node_exporter
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/<span class="hljs-built_in">local</span>/bin/node_exporter \
    --collector.logind
[Install]
WantedBy=multi-user.target
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696609165286/5264f204-d2f4-4f24-a88f-bb3e871fc863.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>Compared to the Prometheus unit file, the user and group are replaced with <code>node_exporter</code>, and the <code>ExecStart</code> command is updated.</p>
<p>To automatically start the Node Exporter after reboot, enable the service.</p>
<pre><code class="lang-bash">sudo systemctl <span class="hljs-built_in">enable</span> node_exporter
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751774155659/4ba37197-9e67-49b6-a3c8-478eaecc4f13.png" alt class="image--center mx-auto" /></p>
<p>Then start the Node Exporter.</p>
<pre><code class="lang-bash">sudo systemctl start node_exporter
</code></pre>
<p>Check the status of Node Exporter with the following command:</p>
<pre><code class="lang-bash">sudo systemctl status node_exporter
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751774175594/d4b51c82-8c1a-4752-844a-7546503d5e48.png" alt class="image--center mx-auto" /></p>
<p>If you have any issues, check logs with journalctl</p>
<pre><code class="lang-bash">journalctl -u node_exporter -f --no-pager
</code></pre>
<p>At this point, we have only a single target in our Prometheus. There are many different service discovery mechanisms built into Prometheus. For example, Prometheus can dynamically discover targets in AWS, GCP, and other clouds based on the labels. In the following tutorials, I’ll give you a few examples of deploying Prometheus in a cloud-specific environment. For this tutorial, let’s keep it simple and keep adding static targets. Also, I have a lesson on how to deploy and manage Prometheus in the Kubernetes cluster.</p>
<p>To create a static target, you need to add job_name with static_configs.</p>
<pre><code class="lang-bash">sudo vim /etc/prometheus/prometheus.yml
</code></pre>
<p><strong>prometheus.yml</strong></p>
<pre><code class="lang-bash">  - job_name: node_export
    static_configs:
      - targets: [<span class="hljs-string">"localhost:9100"</span>]
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696609076020/4394a954-0bb2-4b6f-9298-836803875641.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>By default, Node Exporter will be exposed on port 9100.</p>
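<p>In context, the new job sits as a second entry under the existing <code>scrape_configs</code> key (the <code>prometheus</code> job shown is the default one from the example config):</p>
<pre><code class="lang-bash">scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: node_export
    static_configs:
      - targets: ["localhost:9100"]
</code></pre>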
<p>Since we enabled lifecycle management via API calls, we can reload the Prometheus config without restarting the service and causing downtime.</p>
<p>Before restarting, check that the config is valid.</p>
<pre><code class="lang-bash">promtool check config /etc/prometheus/prometheus.yml
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751774202519/fea6e65b-35db-4e38-b587-7c4578ca3b82.png" alt class="image--center mx-auto" /></p>
<p>Then, you can use a POST request to reload the config.</p>
<pre><code class="lang-bash">curl -X POST http://localhost:9090/-/reload
</code></pre>
<p>Check the targets section</p>
<pre><code class="lang-bash">http://&lt;ip&gt;:9090/targets
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751774220466/8a7c5ad0-747f-468c-9047-3a42c975abec.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-install-grafana-on-ubuntu-2204"><strong>Install Grafana on Ubuntu 22.04</strong></h3>
<p>To visualize metrics we can use Grafana. There are many different data sources that Grafana supports; one of them is Prometheus.</p>
<p>First, let’s make sure that all the dependencies are installed.</p>
<pre><code class="lang-bash">sudo apt-get install -y apt-transport-https software-properties-common
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751774234075/23398815-e112-46bf-b729-3cb052dc71ce.png" alt class="image--center mx-auto" /></p>
<p>Next, add the GPG key.</p>
<pre><code class="lang-bash">wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751774246709/5b94f0ab-ae5c-452e-86eb-2d81b91b5ef6.png" alt class="image--center mx-auto" /></p>
<p>Add this repository for stable releases.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"deb https://packages.grafana.com/oss/deb stable main"</span> | sudo tee -a /etc/apt/sources.list.d/grafana.list
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751774259883/54e0d46c-6e37-44ea-be08-7384b0bc7639.png" alt class="image--center mx-auto" /></p>
<p>After you add the repository, update the package list and install Grafana.</p>
<pre><code class="lang-bash">sudo apt-get update
</code></pre>
<pre><code class="lang-bash">sudo apt-get -y install grafana
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751774280656/d1de0270-339d-4dd0-85b7-df9d7bad4cf5.png" alt class="image--center mx-auto" /></p>
<p>To automatically start the Grafana after reboot, enable the service.</p>
<pre><code class="lang-bash">sudo systemctl <span class="hljs-built_in">enable</span> grafana-server
</code></pre>
<p>Then start the Grafana.</p>
<pre><code class="lang-bash">sudo systemctl start grafana-server
</code></pre>
<p>To check the status of Grafana, run the following command:</p>
<pre><code class="lang-bash">sudo systemctl status grafana-server
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751774298816/b3808a09-8b63-4819-ae0d-363e861cb9ab.png" alt class="image--center mx-auto" /></p>
<p>Go to <code>http://&lt;ip&gt;:3000</code> and log in to Grafana using the default credentials: the username is admin, and the password is admin as well.</p>
<pre><code class="lang-bash">username admin
password admin
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751774323671/d9553c2a-1ef6-42a6-bdca-590ce1b1d0f5.png" alt class="image--center mx-auto" /></p>
<p>When you log in for the first time, you get the option to change the password.</p>
<p>To visualize metrics, you need to add a data source first.</p>
<p>Click Add data source and select Prometheus.</p>
<p>For the URL, enter <code>http://localhost:9090</code> (or <code>http://&lt;public-ip&gt;:9090</code> if Prometheus runs on a different machine) and click Save and test. You should see that the data source is working.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751774420269/a125a4ea-33b0-43c0-bf37-6fab03f61197.png" alt class="image--center mx-auto" /></p>
<p>Click on Save and Test.</p>
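<p>If you prefer configuration as code, Grafana can also pick up the data source from a provisioning file instead of the UI. A minimal sketch, following Grafana's provisioning format (adjust the URL to your setup):</p>
<pre><code class="lang-bash"># /etc/grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true
</code></pre>
<p>Restart Grafana (<code>sudo systemctl restart grafana-server</code>) after adding the file.</p>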
<p>Let’s add a dashboard for a better view.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751774408361/b6f9ded5-4fc9-43bf-acf9-218108765e41.png" alt class="image--center mx-auto" /></p>
<p>Click on Import Dashboard, enter the dashboard ID <mark>1860</mark>, and click on Load.</p>
<p>Select the data source and click on Import.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751774396418/7a7ff9aa-2d53-44b5-b858-4eae4974bf1b.png" alt class="image--center mx-auto" /></p>
<p>You will see this output</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751774386392/fcf5bc60-bbb7-49f6-986b-35cdb32c7af4.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-5-install-the-prometheus-plugin-and-integrate-it-with-the-prometheus-server"><strong>Step 5 — Install the Prometheus Plugin and Integrate it with the Prometheus server</strong></h3>
<p>Let’s monitor the <mark>Jenkins system</mark>.</p>
<p>You need a machine with Jenkins up and running.</p>
<p>Go to Manage Jenkins –&gt; Plugins –&gt; Available Plugins.</p>
<p>Search for <code>Prometheus</code> and install it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751776690766/55ed196b-a443-48e1-b355-76052b3f3281.png" alt class="image--center mx-auto" /></p>
<p>Once that is done, you will see that Prometheus is set to the <code>/prometheus</code> path in the system configuration.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696611956419/c83ac5ca-c7b6-45bf-9daa-149a6e96f784.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>There is nothing to change; click on Apply and Save.</p>
<p>To create a static target, you need to add a job_name with static_configs. Go to the Prometheus server and open the config.</p>
<pre><code class="lang-bash">sudo vim /etc/prometheus/prometheus.yml
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751776705774/a0b14556-9ca3-42ed-a6e7-22cdc723895f.png" alt class="image--center mx-auto" /></p>
<p>Paste the code below under the <code>scrape_configs</code> section (indentation matters in YAML):</p>
<pre><code class="lang-bash">  - job_name: 'jenkins'
    metrics_path: '/prometheus'
    static_configs:
      - targets: ['&lt;jenkins-ip&gt;:8080']
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696612070770/6c7bdfc2-6d68-4149-889e-2bd1bfbb0ae3.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
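<p>For context, after the edit the <code>scrape_configs</code> section of a stock <code>prometheus.yml</code> would look roughly like this (the default <code>prometheus</code> job shown here is an assumption based on the stock config; only the <code>jenkins</code> job is new):</p>
<pre><code class="lang-bash">scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'jenkins'
    metrics_path: '/prometheus'
    static_configs:
      - targets: ['&lt;jenkins-ip&gt;:8080']
</code></pre>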
<p>Before restarting, check that the config is valid.</p>
<pre><code class="lang-bash">promtool check config /etc/prometheus/prometheus.yml
</code></pre>
<p>Then, you can use a POST request to reload the config.</p>
<pre><code class="lang-bash">curl -X POST http://localhost:9090/-/reload
</code></pre>
<p>Check the targets section</p>
<pre><code class="lang-bash">http://&lt;ip&gt;:9090/targets
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751776740499/a24d7e9a-a2f5-4985-a433-2c0b7080fefa.png" alt class="image--center mx-auto" /></p>
<p>You will see that Jenkins has been added.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751776616975/dfaf0365-cb52-406c-9e2b-f7eebf1de5e0.png" alt class="image--center mx-auto" /></p>
<p>Let’s add a dashboard in Grafana for a better view.</p>
<p>Click on Dashboard –&gt; + symbol –&gt; Import Dashboard.</p>
<p>Use the ID <code>9964</code> and click on Load.</p>
<p>Select the data source and click on Import.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751776634297/9b34c017-14b9-4868-b6bd-f344a08e9c67.png" alt class="image--center mx-auto" /></p>
<p>Now you will see a detailed overview of Jenkins.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751776644318/fa11db88-d455-4763-a360-ed9a1d8a860f.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-6-email-integration-with-jenkins-and-plugin-setup"><strong>Step 6 — Email Integration With Jenkins and Plugin Setup</strong></h3>
<p>Install <code>Email Extension Plugin</code> in Jenkins</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694164210365/4c78af7c-80a6-4442-b9cc-ee87b6956c1a.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>Go to your Gmail and click on your profile.</p>
<p>Then click on Manage Your Google Account –&gt; click on the Security tab on the left side panel, and you will get this page (provide your mail password).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694164383650/5f0522cd-a90d-4f8c-8490-bfbae7f2191f.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>2-step verification should be enabled.</p>
<p>Search for “app passwords” in the search bar, and you will get App passwords as in the image below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694164475761/1c3e3229-acfd-4e97-825d-4bd8d8b316bb.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694164556725/f8a7bcdc-c65e-4c18-b6a2-d9d388594b9e.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>Click on Other, provide a name, click on Generate, and copy the password.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694287596221/f9bae083-350a-446e-9b3e-63db2656fea8.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>In the new update, you will get a password like this</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696838555126/14ef8a94-c402-4538-ac96-5057adb09edf.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Once the plugin is installed in Jenkins, click on Manage Jenkins –&gt; Configure System. There, under the E-mail Notification section, configure the details as shown in the image below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751774721359/ac5891e4-19d0-423f-8594-1d7eeb63de49.png" alt class="image--center mx-auto" /></p>
<p>Click on Apply and save.</p>
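<p>For reference, the fields in that form are typically filled with the standard Gmail SMTP values:</p>
<pre><code class="lang-bash">SMTP server: smtp.gmail.com
Use SMTP Authentication: yes
  User Name: your Gmail address
  Password:  the app password generated above
Use SSL, SMTP Port: 465   # or TLS with port 587
</code></pre>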
<p>Click on Manage Jenkins –&gt; Credentials and add your mail username and the generated app password.</p>
<p>This is just to verify the mail configuration.</p>
<p>Now, under the Extended E-mail Notification section, configure the details as shown in the images below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694164940903/25104e50-fb02-4e8b-8c0d-f0c02f486975.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694164952910/0b6f0b17-238d-4d5d-ba57-dfe6818e8f09.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>Click on Apply and save.</p>
<p>Add this post block to your pipeline to send a build-status email with the Trivy reports attached:</p>
<pre><code class="lang-bash">post {
     always {
        emailext attachLog: true,
            subject: "'${currentBuild.result}'",
            body: "Project: ${env.JOB_NAME}&lt;br/&gt;" +
                "Build Number: ${env.BUILD_NUMBER}&lt;br/&gt;" +
                "URL: ${env.BUILD_URL}&lt;br/&gt;",
            to: 'postbox.aj99@gmail.com',  // change to your mail
            attachmentsPattern: 'trivyfs.txt,trivyimage.txt'
        }
    }
</code></pre>
<p>Next, we will log in to Jenkins and start configuring our pipeline.</p>
<h3 id="heading-step-7-install-plugins-like-jdk-sonarqube-scanner-nodejs-owasp-dependency-check"><strong>Step 7 — Install Plugins like JDK, Sonarqube Scanner, NodeJs, OWASP Dependency Check</strong></h3>
<h3 id="heading-7a-install-plugin"><strong>7A — Install Plugin</strong></h3>
<p>Goto Manage Jenkins →Plugins → Available Plugins →</p>
<p>Install the plugins below:</p>
<p>1 → Eclipse Temurin Installer (Install without restart)</p>
<p>2 → SonarQube Scanner (Install without restart)</p>
<p>3 → NodeJs Plugin (Install Without restart)</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694160106164/d829e70f-9a23-4d03-a427-887d779aa141.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695227031117/e7a88b82-e007-465f-8c7f-b911c0e5f658.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-7b-configure-java-and-nodejs-in-global-tool-configuration"><strong>7B — Configure Java and Nodejs in Global Tool Configuration</strong></h3>
<p>Goto Manage Jenkins → Tools → Install JDK(17) and NodeJs(16)→ Click on Apply and Save</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694160282666/7565037d-cf4f-4034-b55a-7028e580e3f8.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695403120569/3010e514-d64c-438a-85eb-d340ed5d3331.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-7c-create-a-job"><strong>7C — Create a Job</strong></h3>
<p>Create a job named Netflix, select Pipeline, and click on OK.</p>
<h3 id="heading-step-8-configure-sonar-server-in-manage-jenkins"><strong>Step 8 — Configure Sonar Server in Manage Jenkins</strong></h3>
<p>Grab the public IP address of your EC2 instance; Sonarqube runs on port 9000, so open &lt;Public IP&gt;:9000. Go to your Sonarqube server, then click on Administration → Security → Users → click on Tokens and Update Token → give it a name → and click on Generate Token.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694160792404/611ddf2a-9c2c-414a-ad3a-2ba84f8942ca.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>click on update Token</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694160866134/49f34dd2-de41-455c-aee0-f1ffe13d8488.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>Create a token with a name and generate</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751774831402/d8ad7dbc-a83a-40da-9909-4cbc6515516f.png" alt class="image--center mx-auto" /></p>
<p>copy Token</p>
<p>Goto Jenkins Dashboard → Manage Jenkins → Credentials → Add Secret Text. It should look like this</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694161147409/35b423f8-e7e4-402e-895b-cd1869b8a170.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>You will see this page once you click on Create.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694161221410/9a7a6f3f-fd40-4bbb-8857-bf2ffb8e6895.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>Now, go to Dashboard → Manage Jenkins → System and add it like the image below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694161318710/738bd6ea-c374-4c76-9234-5ec09cfe754f.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>Click on Apply and Save</p>
<p><strong>The Configure System option</strong> is used in Jenkins to configure different servers.</p>
<p><strong>Global Tool Configuration</strong> is used to configure the different tools that we install using plugins.</p>
<p>We will install a sonar scanner in the tools.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694161414550/efa6ef74-29e5-41c4-a81e-e6c528048c9f.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>In the Sonarqube dashboard, also add a quality gate webhook:</p>
<p>Administration –&gt; Configuration –&gt; Webhooks</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694242716931/c913bcc3-07c6-4e68-b73b-5ec04c63d4d6.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>Click on Create</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751774879175/548cbc93-f2cd-4303-a733-68c29f0ee550.png" alt class="image--center mx-auto" /></p>
<p>Add the details:</p>
<pre><code class="lang-bash"># in the URL section of the webhook
http://&lt;jenkins-public-ip&gt;:8080/sonarqube-webhook/
</code></pre>
<p>Let’s go to our Pipeline and add the script in our Pipeline Script.</p>
<pre><code class="lang-bash">pipeline{
    agent any
    tools{
        jdk <span class="hljs-string">'jdk17'</span>
        nodejs <span class="hljs-string">'node16'</span>
    }
    environment {
        SCANNER_HOME=tool <span class="hljs-string">'sonar-scanner'</span>
    }
    stages {
        stage(<span class="hljs-string">'clean workspace'</span>){
            steps{
                cleanWs()
            }
        }
        stage(<span class="hljs-string">'Checkout from Git'</span>){
            steps{
                git branch: <span class="hljs-string">'main'</span>, url: <span class="hljs-string">'https://github.com/gurjar-vishal/Netflix-clone.git'</span>
            }
        }
        stage(<span class="hljs-string">"Sonarqube Analysis "</span>){
            steps{
                withSonarQubeEnv(<span class="hljs-string">'sonar-server'</span>) {
                    sh <span class="hljs-string">''</span><span class="hljs-string">' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=Netflix \
                    -Dsonar.projectKey=Netflix '</span><span class="hljs-string">''</span>
                }
            }
        }
        stage(<span class="hljs-string">"quality gate"</span>){
           steps {
                script {
                    waitForQualityGate abortPipeline: <span class="hljs-literal">false</span>, credentialsId: <span class="hljs-string">'Sonar-token'</span>
                }
            }
        }
        stage(<span class="hljs-string">'Install Dependencies'</span>) {
            steps {
                sh <span class="hljs-string">"npm install"</span>
            }
        }
    }
    post {
     always {
        emailext attachLog: <span class="hljs-literal">true</span>,
            subject: <span class="hljs-string">"'<span class="hljs-variable">${currentBuild.result}</span>'"</span>,
            body: <span class="hljs-string">"Project: <span class="hljs-variable">${env.JOB_NAME}</span>&lt;br/&gt;"</span> +
                <span class="hljs-string">"Build Number: <span class="hljs-variable">${env.BUILD_NUMBER}</span>&lt;br/&gt;"</span> +
                <span class="hljs-string">"URL: <span class="hljs-variable">${env.BUILD_URL}</span>&lt;br/&gt;"</span>,
            to: <span class="hljs-string">'vgahir099@gmail.com'</span>,
            attachmentsPattern: <span class="hljs-string">'trivyfs.txt,trivyimage.txt'</span>
        }
    }
}
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751775094508/6d53166e-12ec-4588-a70d-ed9464d6c150.png" alt class="image--center mx-auto" /></p>
<p>Click on Build Now; you will see the stage view like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751775123184/957e92af-23c0-4563-9c4d-5d52d60aef61.png" alt class="image--center mx-auto" /></p>
<p>To see the report, you can go to Sonarqube Server and go to Projects.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696677877388/0f010d54-7f00-4d20-ba17-b28a4ce50e47.png?auto=compress,format&amp;format=webp" alt /></p>
<p>You can see the report has been generated and the status shows as passed. You can also see that it scanned 3.2k lines. To see a detailed report, you can go to Issues.</p>
<h3 id="heading-step-9-install-owasp-dependency-check-plugins"><strong>Step 9 — Install OWASP Dependency Check Plugins</strong></h3>
<p>Go to Dashboard → Manage Jenkins → Plugins → OWASP Dependency-Check. Click on it and install it without restart.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751775155284/96d4d634-98c9-433b-bed5-8aef7b2a6c44.png" alt class="image--center mx-auto" /></p>
<p>First, we installed the plugin; next, we need to configure the tool.</p>
<p>Go to Dashboard → Manage Jenkins → Tools →</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694162653334/158f6b94-7c92-4556-9151-6213a003b431.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>Click on Apply and Save here.</p>
<p>Now go to Configure → Pipeline, add this stage to your pipeline, and build.</p>
<pre><code class="lang-bash">stage(<span class="hljs-string">'OWASP FS SCAN'</span>) {
            steps {
                dependencyCheck additionalArguments: <span class="hljs-string">'--scan ./ --disableYarnAudit --disableNodeAudit'</span>, odcInstallation: <span class="hljs-string">'DP-Check'</span>
                dependencyCheckPublisher pattern: <span class="hljs-string">'**/dependency-check-report.xml'</span>
            }
        }
        stage(<span class="hljs-string">'TRIVY FS SCAN'</span>) {
            steps {
                sh <span class="hljs-string">"trivy fs . &gt; trivyfs.txt"</span>
            }
        }
</code></pre>
<p>The stage view would look like this,</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751775169116/6beed8a4-30d3-4528-8fa6-77a680ee32bf.png" alt class="image--center mx-auto" /></p>
<p>You will see that in the status, a graph of the vulnerabilities will also be generated.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695231385639/64d49427-ce7b-4723-a432-faf02dbca838.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-step-10-docker-image-build-and-push"><strong>Step 10 — Docker Image Build and Push</strong></h3>
<p>We need to install the Docker tooling in our system. Go to Dashboard → Manage Plugins → Available plugins → search for Docker and install these plugins:</p>
<p><code>Docker</code></p>
<p><code>Docker Commons</code></p>
<p><code>Docker Pipeline</code></p>
<p><code>Docker API</code></p>
<p><code>docker-build-step</code></p>
<p>and click on install without restart</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751775180534/66b9c1b4-363a-4a4c-a949-643691baee59.png" alt class="image--center mx-auto" /></p>
<p>Now, goto Dashboard → Manage Jenkins → Tools →</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694163030620/59df4527-29d4-41df-8dcd-501e04fda7eb.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>Add DockerHub Username and Password under Global Credentials</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694163085161/2dfb909a-61a8-4122-9292-87a2742bb8d9.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>Add this stage to Pipeline Script</p>
<pre><code class="lang-bash">stage(<span class="hljs-string">"Docker Build &amp; Push"</span>){
            steps{
                script{
                   withDockerRegistry(credentialsId: <span class="hljs-string">'docker'</span>, toolName: <span class="hljs-string">'docker'</span>){
                       sh <span class="hljs-string">"docker build --build-arg TMDB_V3_API_KEY=6af817c812a4a8481f914c627b9ba292 -t netflix ."</span>
                       sh <span class="hljs-string">"docker tag netflix vgahir/netflix:latest "</span>
                       sh <span class="hljs-string">"docker push vgahir/netflix:latest "</span>
                    }
                }
            }
        }
        stage(<span class="hljs-string">"TRIVY"</span>){
            steps{
                sh <span class="hljs-string">"trivy image vgahir/netflix:latest &gt; trivyimage.txt"</span>
            }
        }
</code></pre>
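<p>A note on the hardcoded <code>TMDB_V3_API_KEY</code> above: in a real pipeline you would normally store the key as a Jenkins Secret text credential and inject it via the declarative <code>environment</code> block. A sketch (the credential ID <code>tmdb-api-key</code> is hypothetical):</p>
<pre><code class="lang-bash">environment {
    // 'tmdb-api-key' is a hypothetical Secret text credential ID in Jenkins
    TMDB_V3_API_KEY = credentials('tmdb-api-key')
}
// then in the build stage:
// sh "docker build --build-arg TMDB_V3_API_KEY=$TMDB_V3_API_KEY -t netflix ."
</code></pre>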
<p>You will see the output below, with a dependency trend.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751775344154/2521f74b-273d-457e-ae5f-2394ad51829b.png" alt class="image--center mx-auto" /></p>
<p>When you log in to Dockerhub, you will see that a new image has been created.</p>
<p>Now run the container to see if the application is coming up or not by adding the stage below.</p>
<pre><code class="lang-bash">stage(<span class="hljs-string">'Deploy to container'</span>){
            steps{
                sh <span class="hljs-string">'docker run -d --name netflix -p 8081:80 vgahir/netflix:latest'</span>
            }
        }
</code></pre>
<p>stage view</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751775374555/6c084254-826b-417b-a464-25e5c7e919e4.png" alt class="image--center mx-auto" /></p>
<p>Open <code>http://&lt;Jenkins-public-ip&gt;:8081</code> in your browser.</p>
<p>You will get this output</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751775382376/4d0ac655-2d7f-41bf-8e30-c2fd64504f1c.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-11-kuberenetes-setup"><strong>Step 11 — Kubernetes Setup</strong></h3>
<p>Connect to your machines with PuTTY or MobaXterm.</p>
<p><strong>Take two Ubuntu 20.04 instances: one for the K8s master and the other for the worker.</strong></p>
<p>Install kubectl on the Jenkins machine as well.</p>
<h3 id="heading-kubectl-is-to-be-installed-on-jenkins-also"><strong>Kubectl is to be installed on Jenkins also</strong></h3>
<pre><code class="lang-bash">sudo apt update
sudo apt install curl
curl -LO https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl
sudo install -o root -g root -m 0755 kubectl /usr/<span class="hljs-built_in">local</span>/bin/kubectl
kubectl version --client
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751775421941/d25c8745-de4e-4676-bbac-65a34296404a.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751775478649/5332e3cf-5b12-4540-bc06-6f0e756355bb.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-part-1-master-node"><strong>Part 1 ———-Master Node————</strong></h3>
<pre><code class="lang-bash">sudo hostnamectl set-hostname K8s-Master
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751775555298/1a2e7e62-a000-4a1a-ba6e-d4d0318dc149.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-worker-node"><strong>———-Worker Node————</strong></h3>
<pre><code class="lang-bash">sudo hostnamectl set-hostname K8s-Worker
</code></pre>
<h3 id="heading-part-2-both-master-amp-node"><strong>Part 2 ————Both Master &amp; Node ————</strong></h3>
<pre><code class="lang-bash">sudo apt-get update
sudo apt-get install -y docker.io
sudo usermod -aG docker ubuntu
newgrp docker
sudo chmod 777 /var/run/docker.sock
</code></pre>
<h3 id="heading-part-3-master"><strong>Part 3 ————— Master —————</strong></h3>
<pre><code class="lang-bash">sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
<span class="hljs-built_in">echo</span> <span class="hljs-string">'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /'</span> | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
</code></pre>
<h3 id="heading-worker-node-1"><strong>———-Master Node————</strong></h3>
<p>Note: <code>kubeadm init</code> and the kubeconfig setup below run on the master, not the worker. The worker joins the cluster afterwards with the join command that <code>kubeadm init</code> prints (install the Part 3 packages on the worker as well before joining).</p>
<pre><code class="lang-bash">sudo kubeadm init --pod-network-cidr=10.244.0.0/16
<span class="hljs-comment"># if you are logged in as root, exit to a regular user and run the commands below</span>
mkdir -p <span class="hljs-variable">$HOME</span>/.kube
sudo cp -i /etc/kubernetes/admin.conf <span class="hljs-variable">$HOME</span>/.kube/config
sudo chown $(id -u):$(id -g) <span class="hljs-variable">$HOME</span>/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
</code></pre>
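<p><code>kubeadm init</code> prints a join command at the end of its output. On the worker node, run that command; it looks like this (the token and hash are placeholders — use the values from your own <code>kubeadm init</code> output):</p>
<pre><code class="lang-bash"># run on the worker node; values come from the kubeadm init output on the master
sudo kubeadm join &lt;master-ip&gt;:6443 --token &lt;token&gt; \
    --discovery-token-ca-cert-hash sha256:&lt;hash&gt;
</code></pre>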
<p>Copy the kubeconfig (the contents of <code>~/.kube/config</code> on the master) to the machine you use to access Jenkins.</p>
<p>Save it in Documents or another folder as secret-file.txt.</p>
<p>Note: create a secret-file.txt in your file explorer, save the config in it, and use this file in the Kubernetes credential section.</p>
<p>Install the Kubernetes plugin. Once it is installed successfully,</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694163896514/5edda1d6-b9a3-45c2-9bcf-e677939191cf.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>go to Manage Jenkins –&gt; Manage Credentials –&gt; click on Jenkins global –&gt; Add Credentials.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694163948237/0e01b94e-c1e4-4fd2-8f5d-730df240a5b5.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-install-nodeexporter-on-both-master-and-worker"><strong>Install Node_exporter on both master and worker</strong></h3>
<p>Let’s add Node_exporter on Master and Worker to monitor the metrics</p>
<p>First, let’s create a system user for Node Exporter by running the following command:</p>
<pre><code class="lang-bash">sudo useradd \
    --system \
    --no-create-home \
    --shell /bin/<span class="hljs-literal">false</span> node_exporter
</code></pre>
<p>You can <a target="_blank" href="https://prometheus.io/download/"><strong>download Node Exporter</strong></a> from the Prometheus downloads page.</p>
<p>Use the wget command to download the binary.</p>
<pre><code class="lang-bash">wget https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz
</code></pre>
<p>Extract the node exporter from the archive.</p>
<pre><code class="lang-bash">tar -xvf node_exporter-1.6.1.linux-amd64.tar.gz
</code></pre>
<p>Move the binary to /usr/local/bin.</p>
<pre><code class="lang-bash">sudo mv \
  node_exporter-1.6.1.linux-amd64/node_exporter \
  /usr/<span class="hljs-built_in">local</span>/bin/
</code></pre>
<p>Clean up: delete the node_exporter archive and folder.</p>
<pre><code class="lang-bash">rm -rf node_exporter*
</code></pre>
<p>Verify that you can run the binary.</p>
<pre><code class="lang-bash">node_exporter --version
</code></pre>
<p>Node Exporter has a lot of collectors that we can enable. If you run Node Exporter with the help flag, you will get all the options.</p>
<pre><code class="lang-bash">node_exporter --<span class="hljs-built_in">help</span>
</code></pre>
<p>We’re going to enable the <code>--collector.logind</code> collector, just for the demo.</p>
<p>Next, create a similar systemd unit file.</p>
<pre><code class="lang-bash">sudo vim /etc/systemd/system/node_exporter.service
</code></pre>
<p><strong>node_exporter.service</strong></p>
<pre><code class="lang-bash">[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5
[Service]
User=node_exporter
Group=node_exporter
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/<span class="hljs-built_in">local</span>/bin/node_exporter \
    --collector.logind
[Install]
WantedBy=multi-user.target
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696609165286/5264f204-d2f4-4f24-a88f-bb3e871fc863.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Compared to the Prometheus unit file, the user and group are set to node_exporter, and the ExecStart command points to the Node Exporter binary.</p>
<p>After creating or editing the unit file, run <code>sudo systemctl daemon-reload</code> so systemd picks it up. Then, to start Node Exporter automatically after a reboot, enable the service.</p>
<pre><code class="lang-bash">sudo systemctl <span class="hljs-built_in">enable</span> node_exporter
</code></pre>
<p>Then start the Node Exporter.</p>
<pre><code class="lang-bash">sudo systemctl start node_exporter
</code></pre>
<p>Check the status of Node Exporter with the following command:</p>
<pre><code class="lang-bash">sudo systemctl status node_exporter
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751775611697/c972a6f8-242d-4581-ae3d-e1fe40c6ebe9.png" alt class="image--center mx-auto" /></p>
<p>If you have any issues, check the logs with journalctl:</p>
<pre><code class="lang-bash">journalctl -u node_exporter -f --no-pager
</code></pre>
<p>At this point, we have only a single target in our Prometheus. There are many different service discovery mechanisms built into Prometheus. For example, Prometheus can dynamically discover targets in AWS, GCP, and other clouds based on the labels. In the following tutorials, I’ll give you a few examples of deploying Prometheus in a cloud-specific environment. For this tutorial, let’s keep it simple and keep adding static targets. Also, I have a lesson on how to deploy and manage Prometheus in the Kubernetes cluster.</p>
<p>To create a static target, you need to add a job_name with static_configs. Go to the Prometheus server and open the config.</p>
<pre><code class="lang-bash">sudo vim /etc/prometheus/prometheus.yml
</code></pre>
<p><strong>prometheus.yml</strong></p>
<pre><code class="lang-bash">  - job_name: node_export_masterk8s
    static_configs:
      - targets: ["&lt;master-ip&gt;:9100"]
  - job_name: node_export_workerk8s
    static_configs:
      - targets: ["&lt;worker-ip&gt;:9100"]
</code></pre>
<p>By default, Node Exporter will be exposed on port 9100.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696838151827/7e21e323-e1fb-48ae-bcb2-67d79ddafa25.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Since we enabled lifecycle management via API calls, we can reload the Prometheus config without restarting the service and causing downtime.</p>
<p>Before restarting, check that the config is valid.</p>
<pre><code class="lang-bash">promtool check config /etc/prometheus/prometheus.yml
</code></pre>
<p>Then, you can use a POST request to reload the config.</p>
<pre><code class="lang-bash">curl -X POST http://localhost:9090/-/reload
</code></pre>
<p>Check the Targets page in the Prometheus UI:</p>
<pre><code class="lang-bash">http://&lt;ip&gt;:9090/targets
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751775669860/29cd2189-fb7a-41d0-823e-e0e4678dcdd2.png" alt class="image--center mx-auto" /></p>
<p>The final step is to deploy to the Kubernetes cluster. Add this stage to the Jenkins pipeline:</p>
<pre><code class="lang-bash">stage(<span class="hljs-string">'Deploy to Kubernetes'</span>){
            steps{
                script{
                    dir(<span class="hljs-string">'Kubernetes'</span>) {
                        withKubeConfig(caCertificate: <span class="hljs-string">''</span>, clusterName: <span class="hljs-string">''</span>, contextName: <span class="hljs-string">''</span>, credentialsId: <span class="hljs-string">'k8s'</span>, namespace: <span class="hljs-string">''</span>, restrictKubeConfigAccess: <span class="hljs-literal">false</span>, serverUrl: <span class="hljs-string">''</span>) {
                                sh <span class="hljs-string">'kubectl apply -f deployment.yml'</span>
                                sh <span class="hljs-string">'kubectl apply -f service.yml'</span>
                        }
                    }
                }
            }
        }
</code></pre>
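<p>For reference, the <code>deployment.yml</code> and <code>service.yml</code> applied by this stage typically look something like the sketch below. The resource names, NodePort service type, and container port 80 are assumptions based on the image built earlier, not the exact files from the repo:</p>
<pre><code class="lang-bash">apiVersion: apps/v1
kind: Deployment
metadata:
  name: netflix
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netflix
  template:
    metadata:
      labels:
        app: netflix
    spec:
      containers:
        - name: netflix
          image: vgahir/netflix:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: netflix-service
spec:
  type: NodePort
  selector:
    app: netflix
  ports:
    - port: 80
      targetPort: 80
</code></pre>
<p>A NodePort service like this is what makes the app reachable at <code>&lt;node-public-ip&gt;:&lt;service port&gt;</code> from outside the cluster.</p>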
<p>Stage view:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751775692852/e7288ecb-bdf8-4782-9e47-3076576835e7.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751775735529/67db5e4f-4056-493d-934c-178b03e62876.png" alt class="image--center mx-auto" /></p>
<p>On the Kubernetes master node, run:</p>
<pre><code class="lang-bash">kubectl get all 
kubectl get svc
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694165240130/0cec6712-24b4-4715-8143-2083a367016e.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-step-12access-from-a-web-browser-with"><strong>STEP 12: Access from a Web browser with</strong></h3>
<p><code>&lt;public-ip-of-slave:service port&gt;</code></p>
<p>output:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751775756226/4465a2d4-65e0-47dc-8a99-2f397fd75403.png" alt class="image--center mx-auto" /></p>
<p>Monitoring</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696838416465/558b691b-fa2d-402c-9325-d2c8aa586e03.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696838310881/390e5e52-4357-41ec-80de-1c76dc29d15a.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696838316614/0156c7ca-e7d8-4a20-9ab4-3b1ce2fef07d.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-step-13-terminate-instances"><strong>Step 13: Terminate instances.</strong></h3>
<h3 id="heading-complete-pipeline"><strong>Complete Pipeline</strong></h3>
<pre><code class="lang-bash">pipeline{
    agent any
    tools{
        jdk <span class="hljs-string">'jdk17'</span>
        nodejs <span class="hljs-string">'node16'</span>
    }
    environment {
        SCANNER_HOME=tool <span class="hljs-string">'sonar-scanner'</span>
    }
    stages {
        stage(<span class="hljs-string">'clean workspace'</span>){
            steps{
                cleanWs()
            }
        }
        stage(<span class="hljs-string">'Checkout from Git'</span>){
            steps{
                git branch: <span class="hljs-string">'main'</span>, url: <span class="hljs-string">'https://github.com/gurjar-vishal/Netflix-clone.git'</span>
            }
        }
        stage(<span class="hljs-string">"Sonarqube Analysis "</span>){
            steps{
                withSonarQubeEnv(<span class="hljs-string">'sonar-server'</span>) {
                    sh <span class="hljs-string">''</span><span class="hljs-string">' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=Netflix \
                    -Dsonar.projectKey=Netflix '</span><span class="hljs-string">''</span>
                }
            }
        }
        stage(<span class="hljs-string">"quality gate"</span>){
           steps {
                script {
                    waitForQualityGate abortPipeline: <span class="hljs-literal">false</span>, credentialsId: <span class="hljs-string">'Sonar-token'</span> 
                }
            } 
        }
        stage(<span class="hljs-string">'Install Dependencies'</span>) {
            steps {
                sh <span class="hljs-string">"npm install"</span>
            }
        }
        stage(<span class="hljs-string">'OWASP FS SCAN'</span>) {
            steps {
                dependencyCheck additionalArguments: <span class="hljs-string">'--scan ./ --disableYarnAudit --disableNodeAudit'</span>, odcInstallation: <span class="hljs-string">'DP-Check'</span>
                dependencyCheckPublisher pattern: <span class="hljs-string">'**/dependency-check-report.xml'</span>
            }
        }
        stage(<span class="hljs-string">'TRIVY FS SCAN'</span>) {
            steps {
                sh <span class="hljs-string">"trivy fs . &gt; trivyfs.txt"</span>
            }
        }
        stage(<span class="hljs-string">"Docker Build &amp; Push"</span>){
            steps{
                script{
                   withDockerRegistry(credentialsId: <span class="hljs-string">'docker'</span>, toolName: <span class="hljs-string">'docker'</span>){   
                       sh <span class="hljs-string">"docker build --build-arg TMDB_V3_API_KEY=6af817c812a4a8481f914c627b9ba292 -t netflix ."</span>
                       sh <span class="hljs-string">"docker tag netflix vgahir/netflix:latest "</span>
                       sh <span class="hljs-string">"docker push vgahir/netflix:latest "</span>
                    }
                }
            }
        }
        stage(<span class="hljs-string">"TRIVY"</span>){
            steps{
                sh <span class="hljs-string">"trivy image vgahir/netflix:latest &gt; trivyimage.txt"</span> 
            }
        }
        stage(<span class="hljs-string">'Deploy to container'</span>){
            steps{
                sh <span class="hljs-string">'docker run -d --name netflix -p 8081:80 vgahir/netflix:latest'</span>
            }
        }
        stage(<span class="hljs-string">'Deploy to Kubernetes'</span>){
            steps{
                script{
                    dir(<span class="hljs-string">'Kubernetes'</span>) {
                        withKubeConfig(caCertificate: <span class="hljs-string">''</span>, clusterName: <span class="hljs-string">''</span>, contextName: <span class="hljs-string">''</span>, credentialsId: <span class="hljs-string">'k8s'</span>, namespace: <span class="hljs-string">''</span>, restrictKubeConfigAccess: <span class="hljs-literal">false</span>, serverUrl: <span class="hljs-string">''</span>) {
                                sh <span class="hljs-string">'kubectl apply -f deployment.yml'</span>
                                sh <span class="hljs-string">'kubectl apply -f service.yml'</span>
                        }   
                    }
                }
            }
        }

    }
    post {
     always {
        emailext attachLog: <span class="hljs-literal">true</span>,
            subject: <span class="hljs-string">"'<span class="hljs-variable">${currentBuild.result}</span>'"</span>,
            body: <span class="hljs-string">"Project: <span class="hljs-variable">${env.JOB_NAME}</span>&lt;br/&gt;"</span> +
                <span class="hljs-string">"Build Number: <span class="hljs-variable">${env.BUILD_NUMBER}</span>&lt;br/&gt;"</span> +
                <span class="hljs-string">"URL: <span class="hljs-variable">${env.BUILD_URL}</span>&lt;br/&gt;"</span>,
            to: <span class="hljs-string">'vgahir099@gmail.com'</span>,
            attachmentsPattern: <span class="hljs-string">'trivyfs.txt,trivyimage.txt'</span>
        }
    }
}
</code></pre>
<p>Hope you found this helpful. Do connect/ follow for more such content.</p>
<p>~GURJAR VISHAL</p>
]]></content:encoded></item><item><title><![CDATA[SSH Key Generation on AWS EC2]]></title><description><![CDATA[Task: Generate a New SSH Key on an AWS EC2
Instance Instance OS: Amazon Linux
Task Objective: Generate a new RSA SSH key pair inside the EC2 instance
Step-by-Step Commands:

SSH into the EC2 instance


ssh -i your-key.pem ec2-user@<ec2-public-ip>


V...]]></description><link>https://gurjar-vishal.me/ssh-key-generation-on-aws-ec2</link><guid isPermaLink="true">https://gurjar-vishal.me/ssh-key-generation-on-aws-ec2</guid><category><![CDATA[AWS]]></category><category><![CDATA[ssh-keys]]></category><category><![CDATA[aws ec2]]></category><dc:creator><![CDATA[GURJAR VISHAL]]></dc:creator><pubDate>Thu, 26 Jun 2025 05:05:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750914946522/a5fd746d-c837-43a0-947f-e8fdb0c8a0cc.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-task-generate-a-new-ssh-key-on-an-aws-ec2"><strong>Task: Generate a New SSH Key on an AWS EC2</strong></h2>
<p>Instance Instance OS: Amazon Linux</p>
<p>Task Objective: Generate a new RSA SSH key pair inside the EC2 instance</p>
<p>Step-by-Step Commands:</p>
<ol>
<li><h2 id="heading-ssh-into-the-ec2-instance"><strong>SSH into the EC2 instance</strong></h2>
</li>
</ol>
<pre><code class="lang-bash">ssh -i your-key.pem ec2-user@&lt;ec2-public-ip&gt;
</code></pre>
<ol start="2">
<li><h2 id="heading-verify-you-are-inside-the-ec2"><strong>Verify you are inside the EC2</strong></h2>
</li>
</ol>
<pre><code class="lang-bash"> whoami
</code></pre>
<p>Output: <code>ec2-user</code></p>
<ol start="3">
<li><h2 id="heading-generate-a-new-ssh-key-pair"><strong>Generate a new SSH key pair</strong></h2>
</li>
</ol>
<pre><code class="lang-bash">ssh-keygen -t rsa -b 4096 -C <span class="hljs-string">"new-key-inside-ec2"</span>
</code></pre>
<p>When prompted:</p>
<ul>
<li><p>Press Enter to accept the default file location</p></li>
<li><p>Press Enter twice for no passphrase</p></li>
</ul>
<h2 id="heading-sample-output"><strong>Sample Output:</strong></h2>
<p>❖ Generating public/private rsa key pair.<br />❖ Enter file in which to save the key (/home/ec2-user/.ssh/id_rsa): [Press Enter]<br />❖ Enter passphrase (empty for no passphrase): [Press Enter]<br />❖ Enter same passphrase again: [Press Enter]<br />❖ Your identification has been saved in /home/ec2-user/.ssh/id_rsa.<br />❖ Your public key has been saved in /home/ec2-user/.ssh/id_rsa.pub.</p>
<ol start="4">
<li><h2 id="heading-list-and-verify-ssh-key-files"><strong>List and verify SSH key files</strong></h2>
</li>
</ol>
<pre><code class="lang-bash">ls -l ~/.ssh/
</code></pre>
<h3 id="heading-expected-output-rw-1-ec2-user-ec2-user-3243-jun-26-0915-idrsa"><strong>Expected Output:</strong></h3>
<pre><code class="lang-bash">-rw------- 1 ec2-user ec2-user 3243 Jun 26 09:15 id_rsa
-rw-r--r-- 1 ec2-user ec2-user  743 Jun 26 09:15 id_rsa.pub
</code></pre>
<ol start="5">
<li><h2 id="heading-optional-view-the-public-key"><strong>(Optional) View the Public Key</strong></h2>
</li>
</ol>
<pre><code class="lang-bash"> cat ~/.ssh/id_rsa.pub
</code></pre>
<h2 id="heading-task-status-completed-successfully"><strong>Task Status: Completed Successfully</strong></h2>
<h3 id="heading-new-ssh-key-pair-generated-at"><strong>New SSH key pair generated at:</strong></h3>
<p>Private key: <code>/home/ec2-user/.ssh/id_rsa</code></p>
<p>Public key: <code>/home/ec2-user/.ssh/id_rsa.pub</code></p>
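<p>A common follow-up, though not part of this task, is to authorize the new key for SSH logins on the same host by appending the public key to <code>authorized_keys</code>. A minimal sketch, assuming the default key path used above:</p>
<pre><code class="lang-bash"># Sketch: authorize the freshly generated key for SSH logins on this host.
# Assumes the default key location (~/.ssh/id_rsa); generates one if missing.
mkdir -p ~/.ssh
chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -N "" -q -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub &gt;&gt; ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
</code></pre>
<p>SSH is strict about these permissions: it refuses keys in a world-readable <code>authorized_keys</code> or an open <code>~/.ssh</code> directory.</p>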
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750913811886/d2c77102-496d-4753-80f1-ceb1a999e0c5.jpeg" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[In-Depth Guide to Robot Shop's 3-Tier Architecture and Key Services]]></title><description><![CDATA[Today, let’s embark on a journey of deploying Stan’s Robot Shop, an educational microservices application. This sandbox environment serves as an excellent playground to delve into the realm of containerized applications, exploring deployment methodol...]]></description><link>https://gurjar-vishal.me/robot-shop</link><guid isPermaLink="true">https://gurjar-vishal.me/robot-shop</guid><category><![CDATA[robot-shop]]></category><category><![CDATA[gravix]]></category><category><![CDATA[gravix-devops]]></category><category><![CDATA[gurjar-vishal]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[GURJAR VISHAL]]></dc:creator><pubDate>Tue, 24 Jun 2025 05:14:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750737051631/0dd84684-61a9-4b2e-a59d-a3187767262d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Today, let’s embark on a journey of deploying Stan’s Robot Shop, an educational microservices application. This sandbox environment serves as an excellent playground to delve into the realm of containerized applications, exploring deployment methodologies in a practical manner.</p>
<h3 id="heading-step1-create-iam-user-in-aws"><strong>STEP1: CREATE IAM USER IN AWS</strong></h3>
<p>Go to the AWS console and log in with your credentials.</p>
<p>In the search bar, type IAM to open the IAM dashboard.</p>
<p>Click on Users, then click Create User.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750737280009/a6a4d35c-6d42-4dbc-8555-348229acd76e.png" alt class="image--center mx-auto" /></p>
<p>When creating an IAM user in AWS, generate an access key for programmatic access. This key includes an Access Key ID and a Secret Access Key, which should be stored securely.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703940528468/dd7056ce-c459-4766-8b8e-2d90b4ab8f38.png" alt /></p>
<h3 id="heading-step2-create-ec2-instance"><strong>STEP2: Create EC2 Instance</strong></h3>
<ol>
<li><p><strong>Sign in to AWS Console:</strong> Log in to your AWS Management Console.</p>
</li>
<li><p><strong>Navigate to EC2 Dashboard:</strong> Go to the EC2 Dashboard by selecting “Services” in the top menu and then choosing “EC2” under the Compute section.</p>
</li>
<li><p><strong>Launch Instance:</strong> Click on the “Launch Instance” button to start the instance creation process.</p>
</li>
<li><p><strong>Choose an Amazon Machine Image (AMI):</strong> Select an appropriate AMI for your instance. For example, you can choose Ubuntu image.</p>
</li>
<li><p><strong>Choose an Instance Type:</strong> In the “Choose Instance Type” step, select <code>t2.medium</code> as your instance type. Proceed by clicking “Next: Configure Instance Details.”</p>
</li>
<li><p><strong>Configure Instance Details:</strong></p>
<ul>
<li><p>For “Number of Instances,” set it to 1 (unless you need multiple instances).</p>
</li>
<li><p>Configure additional settings like network, subnets, IAM role, etc., if necessary.</p>
</li>
<li><p>For “Storage,” click “Add New Volume” and set the size to 8GB (or modify the existing storage to 16GB).</p>
</li>
<li><p>Click “Next: Add Tags” when you’re done.</p>
</li>
</ul>
</li>
<li><p><strong>Add Tags (Optional):</strong> Add any desired tags to your instance. This step is optional, but it helps in organizing instances.</p>
</li>
<li><p><strong>Configure Security Group:</strong></p>
<ul>
<li><p>Choose an existing security group or create a new one.</p>
</li>
<li><p>Ensure the security group has the necessary inbound/outbound rules to allow access as required.</p>
</li>
</ul>
</li>
<li><p><strong>Review and Launch:</strong> Review the configuration details. Ensure everything is set as desired.</p>
</li>
<li><p><strong>Select Key Pair:</strong></p>
<ul>
<li><p>Select “Choose an existing key pair” and choose the key pair from the dropdown.</p>
</li>
<li><p>Acknowledge that you have access to the selected private key file.</p>
</li>
<li><p>Click “Launch Instances” to create the instance.</p>
</li>
</ul>
</li>
<li><p><strong>Access the EC2 Instance:</strong> Once the instance is launched, you can access it using the key pair and the instance’s public IP or DNS.</p>
</li>
</ol>
<p>Ensure you have necessary permissions and follow best practices while configuring security groups and key pairs to maintain security for your EC2 instance.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750737547883/1becf731-9b9b-441a-88ba-f293b541472e.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step3-connect-to-instance-and-install-required-packages"><strong>Step3: Connect to Instance and Install Required Packages</strong></h3>
<p>Install eksctl:</p>
<pre><code class="lang-bash">sudo apt update
curl --silent --location <span class="hljs-string">"https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_<span class="hljs-subst">$(uname -s)</span>_amd64.tar.gz"</span> | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/<span class="hljs-built_in">local</span>/bin
eksctl version
</code></pre>
<p>Install kubectl:</p>
<pre><code class="lang-bash">curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.28.3/2023-11-14/bin/linux/amd64/kubectl
chmod +x ./kubectl
mkdir -p <span class="hljs-variable">$HOME</span>/bin &amp;&amp; cp ./kubectl <span class="hljs-variable">$HOME</span>/bin/kubectl &amp;&amp; <span class="hljs-built_in">export</span> PATH=<span class="hljs-variable">$HOME</span>/bin:<span class="hljs-variable">$PATH</span>
kubectl version --client
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703941160035/ee572bfc-ed2e-46b1-9bed-f1feda5cd8e5.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Install the AWS CLI:</p>
<pre><code class="lang-bash">sudo apt install unzip -y
curl <span class="hljs-string">"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"</span> -o <span class="hljs-string">"awscliv2.zip"</span>
unzip awscliv2.zip
sudo ./aws/install
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703941329577/c6b79a67-4e92-4792-9a3b-d2ef262db8c4.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703941358091/86683747-a6e9-47a7-9ec0-38b7b9b2e01e.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Install Helm:</p>
<pre><code class="lang-bash">curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703941537872/f32d83d9-c89c-4de7-8ed6-4a94feddc0f7.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-step4-eks-setup"><strong>STEP4: EKS Setup</strong></h3>
<p>Configure the AWS CLI (please use the us-east-1 region):</p>
<pre><code class="lang-bash">aws configure
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703941806285/a25da252-4fc3-4857-a156-d6168c140232.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Let’s clone the GitHub repo:</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> https://github.com/gurjar-vishal/3Tier-Robot-shop.git
<span class="hljs-built_in">cd</span> 3Tier-Robot-shop
</code></pre>
<p>Create cluster</p>
<pre><code class="lang-bash">eksctl create cluster --name demo-cluster-three-tier-1 --region us-east-1
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750739653650/1cab8cd6-b689-40eb-87f1-dd02bf85d8ee.png" alt class="image--center mx-auto" /></p>
<p>Now set up the IAM OIDC provider.</p>
<h3 id="heading-commands-to-configure-iam-oidc-provider"><strong>Commands to configure IAM OIDC provider</strong></h3>
<p><mark>USE CLUSTER NAME </mark> <code>demo-cluster-three-tier-1</code></p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> cluster_name=&lt;CLUSTER-NAME&gt;
</code></pre>
<p>The <code>export cluster_name=</code> command stores the cluster name in a shell variable so that later commands can reference it as <code>$cluster_name</code> instead of typing the name repeatedly.</p>
<pre><code class="lang-bash">oidc_id=$(aws eks describe-cluster --name <span class="hljs-variable">$cluster_name</span> --query <span class="hljs-string">"cluster.identity.oidc.issuer"</span> --output text | cut -d <span class="hljs-string">'/'</span> -f 5)
</code></pre>
<p>This command uses the AWS CLI to read the cluster’s OIDC issuer URL from its identity configuration and extract the trailing ID segment.</p>
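<p>To see what that <code>cut</code> is extracting, here is the same slicing applied to a hypothetical issuer URL (the ID shown is a made-up example, not a real provider):</p>
<pre><code class="lang-bash"># Hypothetical issuer URL in the shape returned by describe-cluster
issuer="https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E"
# Field 5 of the '/'-separated URL is the bare OIDC provider ID
echo "$issuer" | cut -d '/' -f 5
</code></pre>
<p>Splitting on <code>/</code> gives <code>https:</code>, an empty field, the hostname, <code>id</code>, and finally the provider ID, which is why field 5 is the one we keep.</p>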
<h3 id="heading-check-if-there-is-an-iam-oidc-provider-configured-already"><strong>Check if there is an IAM OIDC provider configured already</strong></h3>
<pre><code class="lang-bash">aws iam list-open-id-connect-providers | grep <span class="hljs-variable">$oidc_id</span> | cut -d <span class="hljs-string">"/"</span> -f4
</code></pre>
<p>This command lists the OpenID Connect (OIDC) providers registered in your AWS IAM and checks whether one matching this cluster’s issuer ID already exists; empty output means none is configured yet.</p>
<pre><code class="lang-bash">eksctl utils associate-iam-oidc-provider --cluster <span class="hljs-variable">$cluster_name</span> --approve
</code></pre>
<p>This eksctl command associates the IAM OIDC provider with the Amazon EKS (Elastic Kubernetes Service) cluster.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750740147322/95a36dac-a38d-4687-9447-f21346138f82.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-setup-alb-add-on"><strong>Set up the ALB add-on</strong></h3>
<p>Download IAM policy</p>
<pre><code class="lang-bash">curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703942797199/b3687a96-97fd-4de7-a079-fd366c02f370.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Create IAM Policy</p>
<pre><code class="lang-bash">aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam_policy.json
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703942847274/add504c1-1acc-4206-b95a-bff57e69fdd5.png?auto=compress,format&amp;format=webp" alt /></p>
<p>create IAM role</p>
<p><mark>Please Add cluster name and Aws account ID</mark></p>
<pre><code class="lang-bash">eksctl create iamserviceaccount \
  --cluster=&lt;your-cluster-name&gt; \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --role-name AmazonEKSLoadBalancerControllerRole \
  --attach-policy-arn=arn:aws:iam::&lt;your-aws-account-id&gt;:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve
</code></pre>
<p>To get your AWS account ID, go to the AWS console, click your profile name in the top-right corner, and copy the account ID shown there.</p>
<h3 id="heading-deploy-alb-controller"><strong>Deploy ALB controller</strong></h3>
<p>Add helm repo</p>
<pre><code class="lang-bash">helm repo add eks https://aws.github.io/eks-charts
</code></pre>
<p>Update the repo</p>
<pre><code class="lang-bash">helm repo update eks
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703943416053/73441c6d-9337-48e3-acdc-603efd13d72b.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Install</p>
<p><mark>please update VPC_ID in this command</mark></p>
<p>Go to the EKS console and copy your cluster’s VPC ID.</p>
<pre><code class="lang-bash">helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --<span class="hljs-built_in">set</span> clusterName=demo-cluster-three-tier-1 --<span class="hljs-built_in">set</span> serviceAccount.create=<span class="hljs-literal">false</span> --<span class="hljs-built_in">set</span> serviceAccount.name=aws-load-balancer-controller --<span class="hljs-built_in">set</span> region=us-east-1 --<span class="hljs-built_in">set</span> vpcId=&lt;vpc-id&gt;
</code></pre>
<p>Verify that the deployments are running</p>
<pre><code class="lang-bash">kubectl get deployment -n kube-system aws-load-balancer-controller
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703943709891/47004e47-b877-4334-a8bc-2ffe0c9f1b2d.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-ebs-csi-plugin-configuration"><strong>EBS CSI Plugin configuration</strong></h3>
<p>The Amazon EBS CSI plugin requires IAM permissions to make calls to AWS APIs on your behalf.</p>
<p>Create an IAM role and attach a policy. AWS maintains an AWS managed policy or you can create your own custom policy. You can create an IAM role and attach the AWS managed policy with the following command. Replace my-cluster with the name of your cluster. The command deploys an AWS CloudFormation stack that creates an IAM role and attaches the IAM policy to it.</p>
<p><mark>Please add Cluster name</mark></p>
<pre><code class="lang-bash">eksctl create iamserviceaccount \
    --name ebs-csi-controller-sa \
    --namespace kube-system \
    --cluster &lt;YOUR-CLUSTER-NAME&gt; \
    --role-name AmazonEKS_EBS_CSI_DriverRole \
    --role-only \
    --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
    --approve
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750740740258/eb349cac-2456-4aae-b440-586fbdae22ba.png" alt class="image--center mx-auto" /></p>
<p>Run the following command. <mark>Replace &lt;YOUR-CLUSTER-NAME&gt; with the name of your cluster and &lt;AWS-ACCOUNT-ID&gt; with your account ID.</mark></p>
<pre><code class="lang-bash">eksctl create addon --name aws-ebs-csi-driver --cluster &lt;YOUR-CLUSTER-NAME&gt; --service-account-role-arn arn:aws:iam::&lt;AWS-ACCOUNT-ID&gt;:role/AmazonEKS_EBS_CSI_DriverRole --force
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703944052130/fe318dfe-9d1f-4435-ac5a-6b923ecff47e.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Now go into the <code>helm</code> directory and create a namespace:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> helm
kubectl create ns robot-shop
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703944119329/e7e2ca96-1926-4b1e-871b-290eb20d27cf.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Now install the chart into that namespace:</p>
<pre><code class="lang-bash">helm install robot-shop --namespace robot-shop .
</code></pre>
<p>Now check the pods:</p>
<pre><code class="lang-bash">kubectl get pods -n robot-shop
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750741039989/235c781d-a06a-4e05-a042-bbf3b86d9d04.png" alt class="image--center mx-auto" /></p>
<p>Check service</p>
<pre><code class="lang-bash">kubectl get svc -n robot-shop
</code></pre>
<p>Now Apply ingress</p>
<pre><code class="lang-bash">kubectl apply -f ingress.yaml
</code></pre>
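<p>The <code>ingress.yaml</code> in the repo is what tells the AWS Load Balancer Controller to provision the ALB. A typical minimal version looks like the sketch below; the backend service name <code>web</code> and port <code>8080</code> are assumptions about the chart, so check the actual file in the repo:</p>
<pre><code class="lang-bash">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: robot-shop
  namespace: robot-shop
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 8080
</code></pre>
<p>The <code>ingressClassName: alb</code> field is what routes this Ingress to the controller we installed earlier, and the <code>internet-facing</code> scheme is why the resulting load balancer gets a public DNS name.</p>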
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703944457647/b3cc0d59-afe9-4f04-a7cc-a0e8775f94d1.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Now go to the AWS console, search for EC2, and open Load Balancers.</p>
<p>Copy the DNS name of the load balancer.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750741222855/d39e96c8-b39c-499a-8bd5-97a9687a924e.png" alt class="image--center mx-auto" /></p>
<p>Open a new browser tab and paste the DNS name.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750741400392/91fcf983-00eb-439f-8979-d3bb54125f0d.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750741521310/4f142fab-2a69-468d-a141-771cfc5b2261.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750741524359/41c17799-aeeb-4053-b78e-bc04e617b786.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step5-delete-cluster"><strong>STEP5: DELETE CLUSTER</strong></h3>
<p>Run this command:</p>
<pre><code class="lang-bash">eksctl delete cluster --name demo-cluster-three-tier-1 --region us-east-1
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703944847927/9b11ca1c-29b9-4b0e-8659-7c097912317a.png?auto=compress,format&amp;format=webp" alt /></p>
<p>In conclusion, our journey through the deployment and configuration of Stan’s Robot Shop, a versatile microservices application, has been an enlightening exploration into the world of containerized applications, orchestration, and monitoring.</p>
<p>Throughout this guide, we’ve covered a range of essential steps, from deploying the application using Docker Compose to associating IAM OIDC providers with Amazon EKS clusters, unlocking the potential for secure access to AWS resources through Kubernetes service accounts.</p>
<p>Stan’s Robot Shop serves not only as a sandbox for experimenting with diverse technologies like NodeJS, Java, Python, and more but also as a practical learning ground for understanding orchestration tools like Kubernetes and monitoring solutions like Instana.</p>
<p>As you continue to delve into the intricacies of microservices architectures, container orchestration, and monitoring practices, remember that Stan’s Robot Shop is an ideal starting point—a playground where you can further explore, test, and refine your skills in a safe and controlled environment.</p>
<p>We hope this guide has provided valuable insights and practical guidance, empowering you to take your knowledge and understanding of containerized applications and Kubernetes to the next level.</p>
<p>Hope you found this helpful. Do connect/ follow for more such content.</p>
<p>~GURJAR VISHAL</p>
]]></content:encoded></item></channel></rss>