<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[NextGenTechPicks - Cloud, Code, and Tech Reviews]]></title><description><![CDATA[NextGenTechPicks shares cloud engineering tutorials, backend development guides, DevOps automation tips, and honest tech product reviews for developers.]]></description><link>https://nextgentechpicks.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1745123416983/815581de-c2eb-4796-b946-fe3fdc7bcf23.png</url><title>NextGenTechPicks - Cloud, Code, and Tech Reviews</title><link>https://nextgentechpicks.com</link></image><generator>RSS for Node</generator><lastBuildDate>Tue, 14 Apr 2026 11:03:45 GMT</lastBuildDate><atom:link href="https://nextgentechpicks.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[AWS Secrets Manager vs Parameter Store: Which One Should You Use?]]></title><description><![CDATA[Key Points

AWS Secrets Manager is likely best for sensitive data needing automatic rotation, like database passwords, but it comes with higher costs.

AWS Systems Manager Parameter Store seems more cost-effective for configuration data or many small...]]></description><link>https://nextgentechpicks.com/aws-secrets-manager-vs-parameter-store-which-one-should-you-use</link><guid isPermaLink="true">https://nextgentechpicks.com/aws-secrets-manager-vs-parameter-store-which-one-should-you-use</guid><category><![CDATA[AWS]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Security]]></category><category><![CDATA[software development]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Developer]]></category><category><![CDATA[Devops articles]]></category><dc:creator><![CDATA[Manny]]></dc:creator><pubDate>Mon, 28 Apr 2025 11:00:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745800918233/8ab9d7ac-c6b5-4cc7-8789-c11f487828a3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-key-points">Key Points</h2>
<ul>
<li><p><strong>AWS Secrets Manager</strong> is likely best for sensitive data needing automatic rotation, like database passwords, but it comes with higher costs.</p>
</li>
<li><p><strong>AWS Systems Manager Parameter Store</strong> seems more cost-effective for configuration data or many small secrets, with up to 10,000 standard parameters free of charge.</p>
</li>
<li><p>Both services can store secrets securely, but choosing depends on features like rotation, size, and budget.</p>
</li>
<li><p>The choice is rarely contentious, though some developers debate whether Secrets Manager’s cost justifies its extra features.</p>
</li>
</ul>
<h2 id="heading-introduction">Introduction</h2>
<p>When it comes to managing secrets and configuration data on AWS, you have two main options: <strong>AWS Secrets Manager</strong> and <strong>AWS Systems Manager Parameter Store</strong>.<br />Both services can securely store sensitive information — but they differ in features, pricing, and ideal use cases.<br />In this guide, we’ll break down the key differences and help you decide which service best fits your project’s needs.</p>
<h2 id="heading-when-to-use-secrets-manager">When to Use Secrets Manager</h2>
<p>Use <strong>AWS Secrets Manager</strong> when you need stronger security features built right in. It's ideal for:</p>
<ul>
<li><p><strong>Automatic rotation</strong> of secrets, like database usernames and passwords</p>
</li>
<li><p><strong>Larger secrets</strong> (up to 64 KB in size)</p>
</li>
<li><p><strong>Cross-account access</strong> to share secrets safely across multiple AWS accounts</p>
</li>
<li><p><strong>Built-in password generation</strong> for creating strong, random credentials without extra tools</p>
</li>
</ul>
<p>If you’re dealing with critical credentials or want automated secret management, Secrets Manager is the way to go. 🔒</p>
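<p>As a concrete sketch, creating and reading such a secret with the AWS CLI looks roughly like this (the secret name and values below are placeholders, not from a real setup):</p>

```shell
# Create a secret holding example database credentials
aws secretsmanager create-secret \
  --name prod/myapp/db \
  --description "Example database credentials" \
  --secret-string '{"username":"appuser","password":"CHANGE_ME"}'

# Retrieve the current value at runtime
aws secretsmanager get-secret-value \
  --secret-id prod/myapp/db \
  --query SecretString \
  --output text
```

<p>Automatic rotation is then enabled with <code>aws secretsmanager rotate-secret</code>, backed by a Lambda rotation function.</p>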
<h2 id="heading-when-to-use-parameter-store">When to Use Parameter Store</h2>
<p><strong>AWS Systems Manager Parameter Store</strong> is perfect when you need something simple, flexible, and cost-friendly. Use it for:</p>
<ul>
<li><p><strong>Non-sensitive configuration data</strong> like feature flags, URLs, or API endpoints</p>
</li>
<li><p><strong>Lots of small secrets</strong> — it’s free for up to 10,000 standard parameters</p>
</li>
<li><p><strong>Parameter policies</strong> (advanced tier) — set expirations or get notifications when parameters change</p>
</li>
<li><p><strong>Tight integration with Systems Manager</strong> for broader automation and operations tasks</p>
</li>
</ul>
<p>If you’re watching your budget or managing lots of small config values, Parameter Store is a great fit. 💸</p>
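<p>For comparison, storing and reading values with Parameter Store is just as simple from the AWS CLI (the parameter names and values are illustrative placeholders):</p>

```shell
# Store a non-sensitive config value
aws ssm put-parameter \
  --name /myapp/feature-flags/new-ui \
  --type String \
  --value "enabled"

# Store a small secret, encrypted with KMS
aws ssm put-parameter \
  --name /myapp/api-key \
  --type SecureString \
  --value "CHANGE_ME"

# Read it back, decrypted
aws ssm get-parameter \
  --name /myapp/api-key \
  --with-decryption \
  --query Parameter.Value \
  --output text
```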
<h2 id="heading-quick-examples">Quick Examples</h2>
<ul>
<li><p><strong>Database Passwords:</strong> Use <strong>Secrets Manager</strong> so you can automatically rotate and secure your credentials without lifting a finger.</p>
</li>
<li><p><strong>API Keys:</strong> Store them in <strong>Parameter Store</strong> (<code>SecureString</code>) if you don't need rotation and want to keep costs low.</p>
</li>
<li><p><strong>Feature Flags:</strong> Save these in <strong>Parameter Store</strong> (<code>String</code>) since they’re typically non-sensitive and lightweight.</p>
</li>
</ul>
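<p>To make the cost trade-off concrete, here’s a back-of-the-envelope sketch using the list prices quoted in this article — real bills depend on tier, API usage, and region:</p>

```python
# Rough monthly cost comparison using the prices cited in this article:
# Secrets Manager at $0.40/secret/month + $0.05 per 10,000 API calls;
# standard Parameter Store parameters are free (up to 10,000).
def secrets_manager_cost(num_secrets: int, api_calls: int = 0) -> float:
    return num_secrets * 0.40 + (api_calls / 10_000) * 0.05

def parameter_store_standard_cost(num_params: int) -> float:
    # Standard tier is free for up to 10,000 parameters
    if num_params > 10_000:
        raise ValueError("Beyond 10,000 parameters you need the advanced tier")
    return 0.0

# 50 small secrets with ~100k API calls per month
print(f"Secrets Manager: ${secrets_manager_cost(50, 100_000):.2f}/month")   # $20.50
print(f"Parameter Store: ${parameter_store_standard_cost(50):.2f}/month")   # $0.00
```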
<h2 id="heading-key-differences">Key Differences</h2>
<p>The following table summarizes the main differences between Secrets Manager and Parameter Store:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature</td><td>Secrets Manager</td><td>Parameter Store</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Automatic Rotation</strong></td><td>Yes, with AWS services like RDS</td><td>No, manual rotation required</td></tr>
<tr>
<td><strong>Secret Size</strong></td><td>Up to 64 KB</td><td>Up to 4 KB (Standard), 8 KB (Advanced)</td></tr>
<tr>
<td><strong>Cross-account Access</strong></td><td>Yes</td><td>Advanced parameters only (shared via AWS RAM)</td></tr>
<tr>
<td><strong>Cost</strong></td><td>$0.40 per secret/month + $0.05 per 10,000 API calls</td><td>Free for standard parameters (up to 10,000); charges for advanced parameters</td></tr>
<tr>
<td><strong>Built-in Password Generator</strong></td><td>Yes</td><td>No</td></tr>
<tr>
<td><strong>Parameter Policies</strong></td><td>No</td><td>Yes, advanced tier (expiration, notifications)</td></tr>
<tr>
<td><strong>Versioning</strong></td><td>Multiple versions with staging labels</td><td>Version history with labels; latest version returned by default</td></tr>
<tr>
<td><strong>Primary Use</strong></td><td>Sensitive secrets management</td><td>Configuration data and secrets</td></tr>
</tbody>
</table>
</div><h2 id="heading-conclusion">Conclusion</h2>
<p>Both <strong>AWS Secrets Manager</strong> and <strong>Parameter Store</strong> are great tools for managing sensitive data — but they shine in different situations.</p>
<p>If you need automatic secret rotation, cross-account access, or built-in password generation, <strong>Secrets Manager</strong> is the way to go.<br />If you're managing lots of small secrets, non-sensitive config data, or want a more cost-effective option, <strong>Parameter Store</strong> might be a better fit.</p>
<p>Choosing the right service comes down to your project’s needs, security requirements, and budget.<br />Either way, AWS gives you flexible, secure options to keep your applications running smoothly. 🚀</p>
]]></content:encoded></item><item><title><![CDATA[Launch a Static Website on S3 with Amplify in 15 Minutes]]></title><description><![CDATA[Introduction
With Amplify Hosting, you get built-in HTTPS, global CDN performance, and super simple updates — all without having to manage complex infrastructure.In this quick guide, we’ll walk through launching your static site in about 15 minutes —...]]></description><link>https://nextgentechpicks.com/launch-a-static-website-on-s3-with-amplify-in-15-minutes</link><guid isPermaLink="true">https://nextgentechpicks.com/launch-a-static-website-on-s3-with-amplify-in-15-minutes</guid><category><![CDATA[AWS]]></category><category><![CDATA[Devops]]></category><category><![CDATA[hosting]]></category><category><![CDATA[Blogging]]></category><category><![CDATA[S3]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Security]]></category><category><![CDATA[Developer]]></category><category><![CDATA[Developer Tools]]></category><dc:creator><![CDATA[Manny]]></dc:creator><pubDate>Mon, 28 Apr 2025 00:21:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745799497238/94fdc55f-122a-49e0-be71-4f9170adebff.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>With Amplify Hosting, you get built-in HTTPS, global CDN performance, and super simple updates — all without having to manage complex infrastructure.<br />In this quick guide, we’ll walk through launching your static site in about 15 minutes — perfect for portfolios, landing pages, personal blogs, or any project you want to share with the world.</p>
<h2 id="heading-whats-a-static-website">What’s a Static Website?</h2>
<p>A static website is made up of simple, fixed files — like HTML, CSS, JavaScript, and images — that get delivered straight to your visitors, no server magic needed.<br />Unlike dynamic websites (think WordPress or apps that talk to a database), static sites are lightweight, blazing fast, and perfect for content that doesn't change often — like portfolios, blogs, or landing pages.</p>
<h2 id="heading-why-use-s3-and-amplify-for-hosting">Why Use S3 and Amplify for Hosting?</h2>
<p><strong>Amazon S3</strong> gives you reliable, scalable storage — and if you're just starting out, you can even stay inside the free tier (5 GB of storage and 20,000 GET requests a month for your first year). For small sites, costs are usually just pennies a month.</p>
<p><strong>AWS Amplify Hosting</strong> takes it even further: it connects with S3 behind the scenes, adds automatic HTTPS for security, speeds up your site with a global CloudFront CDN, and makes updates as easy as a single command. No manual bucket policies or tricky configurations needed.</p>
<p><strong>In short:</strong><br />✅ Secure (thanks to HTTPS)<br />✅ Easy to update<br />✅ Scales effortlessly with your traffic — no server headaches</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before starting, ensure you have:</p>
<ul>
<li><p>An <strong>AWS account</strong>. Sign up at <a target="_blank" href="http://aws.amazon.com">aws.amazon.com</a> if needed.</p>
</li>
<li><p><strong>Static website files</strong>, including an <code>index.html</code> file and any CSS, JavaScript, or images.</p>
</li>
<li><p><strong>Basic familiarity with the AWS Management Console</strong> — nothing fancy, just being comfortable clicking around.</p>
</li>
</ul>
<p><strong>Time Required</strong>: Approximately 15 minutes, assuming files are ready.</p>
<h2 id="heading-step-by-step-guide">Step-by-Step Guide</h2>
<h3 id="heading-step-1-create-an-s3-bucket">Step 1: Create an S3 Bucket</h3>
<ol>
<li><p>Sign in to the <a target="_blank" href="https://console.aws.amazon.com/">AWS Management Console</a> and navigate to the S3 service.</p>
</li>
<li><p>Click <strong>Create bucket</strong>.</p>
</li>
<li><p>Enter a unique name for your bucket (something like <code>my-static-site-2025</code>).<br /> <em>Reminder:</em> S3 bucket names must be globally unique across all of AWS.</p>
</li>
<li><p>Select a <strong>region</strong> closest to your audience for better performance (e.g., US East for North America).</p>
</li>
<li><p>Leave the default settings as they are — especially <strong>Block Public Access</strong> (we'll let Amplify handle the right permissions for us).</p>
</li>
</ol>
<p><strong>Note</strong>: You don’t need to enable static website hosting on S3 directly, as Amplify handles this.</p>
<h3 id="heading-step-2-upload-your-website-files">Step 2: Upload Your Website Files</h3>
<ol>
<li><p>Select your new bucket in the S3 console.</p>
</li>
<li><p>Click <strong>Upload</strong>.</p>
</li>
<li><p>Drag and drop your website files (e.g., <code>index.html</code>, <code>styles.css</code>, images) into the upload area.</p>
</li>
<li><p>Click <strong>Upload</strong> to add the files.</p>
</li>
</ol>
<p><strong>Tip</strong>: Make sure your <code>index.html</code> is at the <strong>root</strong> of the bucket (not inside a folder), unless you plan to update the Amplify settings later.<br />Also double-check that any links to CSS, JavaScript, or images inside your HTML match the structure you’re uploading — otherwise, you might get broken links later.</p>
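<p>If you prefer the command line, the same upload can be scripted with the AWS CLI (the bucket name is the placeholder from Step 1):</p>

```shell
# Sync the current directory to the bucket; --delete removes remote
# files that no longer exist locally
aws s3 sync . s3://my-static-site-2025 --delete
```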
<h3 id="heading-step-3-create-an-amplify-app-from-s3">Step 3: Create an Amplify App from S3</h3>
<ol>
<li><p>In the S3 console, select your bucket.</p>
</li>
<li><p>Go to the <strong>Properties</strong> tab.</p>
</li>
<li><p>Scroll to <strong>Static website hosting</strong> and click <strong>Create Amplify app</strong>.</p>
</li>
<li><p>You’ll be redirected to the Amplify console.</p>
</li>
<li><p>Enter an <strong>App name</strong> (e.g., <code>MyStaticWebsite</code>).</p>
</li>
<li><p>Enter a <strong>Branch name</strong> (e.g., <code>main</code>). This is a label for S3-based deployments, not tied to Git.</p>
</li>
<li><p>Click <strong>Save and deploy</strong>.</p>
</li>
</ol>
<p>Amplify will now start building and deploying your site automatically! 🚀</p>
<h3 id="heading-step-4-access-your-website">Step 4: Access Your Website</h3>
<ol>
<li><p>In the Amplify console, select your app.</p>
</li>
<li><p>Click on the branch (e.g., <code>main</code>).</p>
</li>
<li><p>Under <strong>Domain</strong>, find the deployed URL (e.g., <a target="_blank" href="https://main.d123456.amplifyapp.com"><code>https://main.d123456.amplifyapp.com</code></a>).</p>
</li>
<li><p>Click <strong>Visit deployed URL</strong> to view your live website.</p>
</li>
</ol>
<p>Your site is now accessible to the world, fully secured with <strong>HTTPS</strong> by default.</p>
<h2 id="heading-updating-your-website">Updating Your Website</h2>
<p>To update content:</p>
<ol>
<li><p>Upload new or modified files to your S3 bucket.</p>
</li>
<li><p>In the Amplify console, select your app and branch.</p>
</li>
<li><p>Click <strong>Deploy updates</strong> to redeploy the latest files.</p>
</li>
</ol>
<p>This one-click process ensures your site stays current without complex workflows.</p>
<h2 id="heading-optional-configure-a-custom-domain">Optional: Configure a Custom Domain</h2>
<p>Want to give your website a more professional touch? You can easily add a custom domain.</p>
<ol>
<li><p>In the Amplify console, select your app.</p>
</li>
<li><p>Go to <strong>Custom domains</strong> and click <strong>Add domain</strong>.</p>
</li>
<li><p>Follow prompts to configure your domain via AWS Route 53 or another DNS provider.</p>
</li>
<li><p>Amplify issues an SSL/TLS certificate for HTTPS.</p>
</li>
</ol>
<p>DNS changes can take a little time to propagate, so it might take a few minutes (or up to a couple of hours) before your custom domain is live. For details, see the <a target="_blank" href="https://docs.aws.amazon.com/amplify/latest/userguide/custom-domains.html">Amplify Hosting User Guide</a>.</p>
<h2 id="heading-cost-considerations">Cost Considerations</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Service</td><td>Cost Details</td><td>Free Tier</td></tr>
</thead>
<tbody>
<tr>
<td><strong>S3</strong></td><td>Storage: ~$0.023/GB/month; Data transfer: ~$0.09/GB (beyond free tier). Small sites cost cents monthly.</td><td>5 GB storage, 20,000 GET requests, 2,000 PUT requests for 12 months.</td></tr>
<tr>
<td><strong>Amplify Hosting</strong></td><td>Build: ~$0.01/minute; Hosting: ~$0.023/GB stored/month, ~$0.15/GB served. Minimal for small sites.</td><td>None, but pay-as-you-go is low-cost.</td></tr>
</tbody>
</table>
</div><p>Check out the <a target="_blank" href="https://aws.amazon.com/s3/pricing/">S3 pricing</a> and <a target="_blank" href="https://aws.amazon.com/amplify/pricing/">Amplify pricing</a> pages for details. Most small sites stay within the free tier or cost under $1/month.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>And just like that — you’ve launched a secure, scalable static website using <strong>S3</strong> and <strong>Amplify Hosting</strong> in around 15 minutes!<br />This setup is perfect for developers, bloggers, or small businesses looking for a low-cost, low-maintenance way to get online fast.</p>
<p>As your site grows, you can easily explore even more Amplify features — like setting up a custom domain, adding redirects, or even connecting to a GitHub repo for automatic CI/CD deployments.<br />The sky’s the limit!</p>
]]></content:encoded></item><item><title><![CDATA[Dockerfiles 101: A Practical Guide to Building Efficient Images]]></title><description><![CDATA[Introduction
A Dockerfile is a simple text file with a list of instructions used to build a Docker image — the foundation for creating containerized applications. It automates everything from the base OS to app code, dependencies, and runtime setup, ...]]></description><link>https://nextgentechpicks.com/dockerfiles-101-a-practical-guide-to-building-efficient-images</link><guid isPermaLink="true">https://nextgentechpicks.com/dockerfiles-101-a-practical-guide-to-building-efficient-images</guid><category><![CDATA[AWS]]></category><category><![CDATA[containerization]]></category><category><![CDATA[containers]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Dockerfile]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Security]]></category><category><![CDATA[tools]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Python]]></category><dc:creator><![CDATA[Manny]]></dc:creator><pubDate>Thu, 24 Apr 2025 01:50:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745458893687/004f2959-547f-43ed-b61d-24714221726f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>A <em>Dockerfile</em> is a simple text file with a list of instructions used to build a Docker image — the foundation for creating containerized applications. It automates everything from the base OS to app code, dependencies, and runtime setup, making your environments consistent and repeatable.</p>
<p>This setup is crucial for modern development. With Dockerfiles, you eliminate the classic "it works on my machine" issue by standardizing how your app is built and run — locally, in CI/CD pipelines, and in production.</p>
<p>Here’s where Dockerfiles shine:</p>
<ul>
<li><p><strong>Local Development</strong>: Spin up production-like environments with ease.</p>
</li>
<li><p><strong>CI/CD Pipelines</strong>: Automate image builds and deployments for faster, safer shipping.</p>
</li>
<li><p><strong>Production Deployment</strong>: Run apps reliably across any environment using the same image every time.</p>
</li>
</ul>
<h1 id="heading-basic-dockerfile-anatomy">Basic Dockerfile Anatomy</h1>
<p>A Dockerfile is made up of simple, declarative instructions that are executed in order to build a Docker image. Each instruction creates a new image layer, forming the final structure of your container. Here are some of the most commonly used instructions and what they do:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Instruction</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td><strong>FROM</strong></td><td>Sets the base image for the build (e.g., <code>FROM python:3.9</code>). It’s the starting point and must be the first instruction (excluding comments and global <code>ARG</code> declarations).</td></tr>
<tr>
<td><strong>RUN</strong></td><td>Executes a command at build time (e.g., <code>RUN pip install -r requirements.txt</code>). Typically used to install packages or modify the image.</td></tr>
<tr>
<td><strong>COPY</strong></td><td>Copies files or directories from your host into the image (e.g., <code>COPY . /app</code>).</td></tr>
<tr>
<td><strong>CMD</strong></td><td>Specifies the default command to run when the container starts (e.g., <code>CMD ["python", "app.py"]</code>). Only the last <code>CMD</code> is used.</td></tr>
<tr>
<td><strong>ADD</strong></td><td>Like <code>COPY</code>, but can also handle remote URLs and extract <code>.tar</code> files (e.g., <code>ADD file.tar.gz /app</code>).</td></tr>
<tr>
<td><strong>ENTRYPOINT</strong></td><td>Defines the main command that always runs in the container (e.g., <code>ENTRYPOINT ["python"]</code>). Can be combined with <code>CMD</code> to pass default arguments.</td></tr>
<tr>
<td><strong>EXPOSE</strong></td><td>Documents the port the container will listen on (e.g., <code>EXPOSE 5000</code>). This <strong>doesn't</strong> publish the port — that’s done with <code>-p</code> when running the container.</td></tr>
</tbody>
</table>
</div><h3 id="heading-sample-minimal-dockerfile-for-a-python-app">Sample Minimal Dockerfile for a Python App</h3>
<p>Here’s a minimal Dockerfile for a Python app using Flask, assuming you have an <code>app.py</code> file as the entry point and a <code>requirements.txt</code> for dependencies:</p>
<pre><code class="lang-dockerfile"><span class="hljs-keyword">FROM</span> python:<span class="hljs-number">3.9</span>

<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>

<span class="hljs-keyword">COPY</span><span class="bash"> requirements.txt .</span>

<span class="hljs-keyword">RUN</span><span class="bash"> pip install --no-cache-dir -r requirements.txt</span>

<span class="hljs-keyword">COPY</span><span class="bash"> . .</span>

<span class="hljs-keyword">CMD</span><span class="bash"> [<span class="hljs-string">"python"</span>, <span class="hljs-string">"app.py"</span>]</span>
</code></pre>
<p>This Dockerfile:</p>
<ul>
<li><p>Uses <code>python:3.9</code> as a base image.</p>
</li>
<li><p>Sets <code>/app</code> as the working directory.</p>
</li>
<li><p>Copies and installs dependencies from <code>requirements.txt</code>.</p>
</li>
<li><p>Copies the application code from your local directory to the working directory <code>/app</code>.</p>
</li>
<li><p>Runs <code>app.py</code> when the container starts.</p>
</li>
</ul>
<h1 id="heading-best-practices">Best Practices</h1>
<p>Following best practices when writing Dockerfiles helps you create images that are efficient, secure, and easy to maintain. Here are some key guidelines recommended by the Docker community:</p>
<ul>
<li><p><strong>Pin Base Image Versions</strong><br />  Always specify an exact version for your base image (e.g., <code>python:3.9.5-slim</code>) instead of using <code>latest</code>. This ensures consistency and avoids unexpected behavior due to upstream changes.</p>
<pre><code class="lang-dockerfile">  <span class="hljs-comment"># Avoid</span>
  <span class="hljs-keyword">FROM</span> python:latest
  <span class="hljs-comment"># Prefer</span>
  <span class="hljs-keyword">FROM</span> python:<span class="hljs-number">3.9</span>.<span class="hljs-number">5</span>-slim
</code></pre>
</li>
<li><p><strong>Use</strong> <code>.dockerignore</code><br />  Create a <code>.dockerignore</code> file to leave out unnecessary files like <code>.git</code>, <code>node_modules</code>, <code>.pyc</code>, and other temp files from your build context. This helps shrink your image size and speeds up build times.</p>
</li>
<li><p><strong>Combine RUN Commands</strong><br />  Each <code>RUN</code> instruction creates a new layer, increasing image size. Combine commands using <code>&amp;&amp;</code> or multi-line syntax to minimize layers. For example:</p>
<pre><code class="lang-dockerfile">    <span class="hljs-comment"># Avoid:</span>
    <span class="hljs-keyword">RUN</span><span class="bash"> apt-get update</span>
    <span class="hljs-keyword">RUN</span><span class="bash"> apt-get install -y wget</span>

    <span class="hljs-comment"># Prefer:</span>
    <span class="hljs-keyword">RUN</span><span class="bash"> apt-get update &amp;&amp; apt-get install -y wget</span>
</code></pre>
</li>
<li><p><strong>Avoid the</strong> <code>latest</code> Tag<br />  Using <code>latest</code> might seem convenient, but it can introduce unexpected changes as the image gets updated over time. Always pin your base image to a specific version to ensure consistent and predictable builds.</p>
</li>
</ul>
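<p>A typical <code>.dockerignore</code> for a Python project might look like this (adjust the entries to your own repo layout):</p>

```plaintext
.git
.gitignore
__pycache__/
*.pyc
.venv/
.env
node_modules/
```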
<p>These practices, based on <a target="_blank" href="https://docs.docker.com/build/building/best-practices/">Docker’s official best practices</a>, help you build leaner, more secure, and more reliable images.</p>
<h1 id="heading-intermediate-tips">Intermediate Tips</h1>
<p>Once you’ve got the basics down, these intermediate techniques can help you take your Dockerfiles to the next level:</p>
<ul>
<li><p><strong>Set Environment Variables</strong><br />  Use the <code>ENV</code> instruction to define environment variables that are accessible at runtime. It’s a clean way to configure your app without hardcoding values into your codebase.</p>
<pre><code class="lang-dockerfile">  <span class="hljs-comment"># Set Flask environment mode</span>
  <span class="hljs-keyword">ENV</span> FLASK_ENV=production

  <span class="hljs-comment"># Disable Python .pyc bytecode file generation</span>
  <span class="hljs-keyword">ENV</span> PYTHONDONTWRITEBYTECODE=<span class="hljs-number">1</span>

  <span class="hljs-comment"># Example API key placeholder</span>
  <span class="hljs-keyword">ENV</span> API_KEY=your-api-key-here
</code></pre>
</li>
<li><p><strong>Caching and Layer Optimization</strong><br />  Docker caches image layers to speed up rebuilds. To take advantage of this, order your instructions from least to most frequently changing. For example, copy and install dependencies <strong>before</strong> adding your full application code — that way, Docker can reuse the cached layers if your app code changes but your dependencies don't.</p>
<pre><code class="lang-dockerfile">  <span class="hljs-comment"># Copy and install dependencies first</span>
  <span class="hljs-keyword">COPY</span><span class="bash"> requirements.txt .</span>
  <span class="hljs-keyword">RUN</span><span class="bash"> pip install -r requirements.txt</span>

  <span class="hljs-comment"># Then copy the rest of the app</span>
  <span class="hljs-keyword">COPY</span><span class="bash"> . .</span>
</code></pre>
</li>
<li><p><strong>Add Health Checks</strong><br />  Use the <code>HEALTHCHECK</code> instruction to define how Docker should check if your container is still healthy. This is useful for monitoring and restarting unhealthy containers automatically.</p>
<pre><code class="lang-dockerfile">  <span class="hljs-keyword">HEALTHCHECK</span><span class="bash"> --interval=5m --timeout=5s \
    CMD curl -f http://localhost:5001/health || <span class="hljs-built_in">exit</span> 1</span>
</code></pre>
<p>  In this example, Docker pings a Flask app’s <code>/health</code> endpoint every 5 minutes. If the check fails, the container is marked as unhealthy. Additionally, make sure <code>curl</code> is installed in your image for this to work.</p>
</li>
</ul>
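<p>Once a container with a health check is running, you can query the status Docker reports (the container name is a placeholder):</p>

```shell
# Prints "healthy", "unhealthy", or "starting"
docker inspect --format '{{.State.Health.Status}}' my-container
```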
<h1 id="heading-multi-stage-builds">🧪Multi-stage Builds</h1>
<p>Multi-stage builds let you use multiple <code>FROM</code> statements to define separate stages in your Dockerfile — typically one for building and another for running your app. This helps keep your final image clean and lightweight by copying only what’s needed into the runtime stage.</p>
<p>It’s a great way to reduce image size, boost security, and streamline builds — especially for Python and Flask apps.</p>
<p>Here’s an example for a Flask app:</p>
<pre><code class="lang-dockerfile"><span class="hljs-comment"># Build stage</span>
<span class="hljs-keyword">FROM</span> python:<span class="hljs-number">3.9</span>-slim AS builder

<span class="hljs-comment"># Set working directory in the image</span>
<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>

<span class="hljs-comment"># Copy only the dependency file to leverage Docker layer caching</span>
<span class="hljs-keyword">COPY</span><span class="bash"> requirements.txt .</span>

<span class="hljs-comment"># Install dependencies to a custom location to keep the runtime image clean</span>
<span class="hljs-keyword">RUN</span><span class="bash"> pip install --prefix=/install -r requirements.txt</span>

<span class="hljs-comment"># Runtime stage: Copy only what's needed to run the app</span>
<span class="hljs-keyword">FROM</span> python:<span class="hljs-number">3.9</span>-slim

<span class="hljs-comment"># Set the same working directory</span>
<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>

<span class="hljs-comment"># Copy the installed packages from the builder stage</span>
<span class="hljs-keyword">COPY</span><span class="bash"> --from=builder /install /usr/<span class="hljs-built_in">local</span></span>

<span class="hljs-comment"># Copy the rest of the application code</span>
<span class="hljs-keyword">COPY</span><span class="bash"> . .</span>

<span class="hljs-comment"># Define the default command to run your app</span>
<span class="hljs-keyword">CMD</span><span class="bash"> [<span class="hljs-string">"python3"</span>, <span class="hljs-string">"docker-entrypoint.py"</span>]</span>
</code></pre>
<p>In this Dockerfile:</p>
<ul>
<li><p>The <strong>build stage</strong> uses <code>python:3.9-slim</code> to install dependencies in an isolated directory.</p>
</li>
<li><p>The <strong>runtime stage</strong> also uses <code>python:3.9-slim</code>, but only copies in the installed packages and app code — leaving behind build tools and temp files.</p>
</li>
<li><p>The result is a smaller, cleaner image that’s easier to ship and more secure.</p>
</li>
</ul>
<p>While multi-stage builds are often used in compiled languages like Go or Java, they can also streamline Python apps by keeping only what’s needed in the final image. This approach aligns with <a target="_blank" href="https://docs.docker.com/get-started/docker-concepts/building-images/multi-stage-builds/">Docker’s official guidance on multi-stage builds</a>.</p>
<h1 id="heading-security-considerations">Security Considerations</h1>
<p>Security is a key part of working with containers. Following a few simple practices can significantly reduce the risk of vulnerabilities in your Docker images.</p>
<ul>
<li><p><strong>Use a Non-root User</strong><br />  By default, containers run as the <code>root</code> user, which can be dangerous if the container is ever compromised. To limit permissions, create a dedicated non-root user and switch to it using the <code>USER</code> instruction:</p>
<pre><code class="lang-dockerfile">  <span class="hljs-comment"># Create a non-root user</span>
  <span class="hljs-keyword">RUN</span><span class="bash"> useradd -m appuser</span>

  <span class="hljs-comment"># Switch to the non-root user</span>
  <span class="hljs-keyword">USER</span> appuser
</code></pre>
<p>  This limits the container’s permissions, reducing potential damage.</p>
</li>
<li><p><strong>Scan Images for Vulnerabilities</strong><br />  Use tools like <a target="_blank" href="https://github.com/aquasecurity/trivy">Trivy</a> or <a target="_blank" href="https://snyk.io/">Snyk</a> to scan your images for known vulnerabilities in OS packages and dependencies. Add scans to your CI pipeline to catch issues early.</p>
</li>
<li><p><strong>Keep Images Small and Minimal</strong><br />  Use slim or minimal base images (e.g., <code>python:3.9-slim</code>) and avoid installing unnecessary tools. Smaller images are not only faster to build and ship — they also reduce your attack surface.</p>
</li>
<li><p><strong>Clean Up After Installations</strong><br />  After installing packages, remove cache and temporary files to avoid bloating your image:</p>
</li>
</ul>
<pre><code class="lang-dockerfile"><span class="hljs-keyword">RUN</span><span class="bash"> apt-get update &amp;&amp; apt-get install -y curl \
    &amp;&amp; rm -rf /var/lib/apt/lists/*</span>
</code></pre>
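<p>To keep these practices from regressing in CI, you can lint Dockerfiles with a small script. The sketch below is only an illustration, not a full parser — it does naive line matching — but it flags images that never drop root and <code>apt-get install</code> steps that skip cache cleanup:</p>

```python
def lint_dockerfile(text: str) -> list[str]:
    """Very naive Dockerfile checks: flag root user and missing apt cache cleanup."""
    warnings = []
    lines = [l.strip() for l in text.splitlines()
             if l.strip() and not l.strip().startswith("#")]
    # The *last* USER instruction decides who the container runs as
    users = [l.split(None, 1)[1] for l in lines if l.upper().startswith("USER ")]
    if not users or users[-1] == "root":
        warnings.append("container runs as root; add a USER instruction")
    for l in lines:
        if "apt-get install" in l and "rm -rf /var/lib/apt/lists" not in l:
            warnings.append("apt-get install without cache cleanup")
    return warnings
```

<p>Run it over your Dockerfiles in CI and fail the build on any warnings.</p>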
<h1 id="heading-wrapping-up">Wrapping up</h1>
<p>In this guide, we walked through the essentials of writing effective Dockerfiles — from basic instructions and best practices to intermediate optimizations, multi-stage builds, and security hardening tips.</p>
<p>By following these patterns, you can build Docker images that are clean, reliable, and production-ready — whether you're spinning up a local dev environment or deploying at scale.</p>
<p>For more inspiration, check out the <a target="_blank" href="https://github.com/docker-library/docs">official Docker samples on GitHub</a> — they showcase Dockerfiles for a wide range of real-world applications.</p>
<p>Have a tip, question, or trick you use in your own Dockerfiles? Drop it in the comments — let’s learn together.</p>
]]></content:encoded></item><item><title><![CDATA[Step-by-Step Guide to Importing Existing AWS Resources into Terraform]]></title><description><![CDATA[Manually managing AWS resources can get messy fast—one small change here, another click there, and suddenly you’re not sure what’s deployed or how. That’s where Terraform comes in. As Infrastructure as Code (IaC), it gives you version control, repeat...]]></description><link>https://nextgentechpicks.com/step-by-step-guide-to-importing-existing-aws-resources-into-terraform</link><guid isPermaLink="true">https://nextgentechpicks.com/step-by-step-guide-to-importing-existing-aws-resources-into-terraform</guid><category><![CDATA[Devops]]></category><category><![CDATA[automation]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[technology]]></category><category><![CDATA[Developer]]></category><category><![CDATA[#IaC]]></category><dc:creator><![CDATA[Manny]]></dc:creator><pubDate>Wed, 23 Apr 2025 09:00:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745383067752/f6e44d8b-e41f-40dc-86b2-773ec33baa6a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Manually managing AWS resources can get messy fast—one small change here, another click there, and suddenly you’re not sure what’s deployed or how. That’s where Terraform comes in. As Infrastructure as Code (IaC), it gives you version control, repeatability, and way less clicking through the console.</p>
<p>But what if you already have AWS resources live and running? Rebuilding them from scratch in Terraform sounds painful—and risky. That’s where <code>terraform import</code> helps. It lets you bring existing infrastructure under Terraform’s control without tearing anything down.</p>
<p>Whether you’re migrating from manual setups, starting to roll out IaC, or just want a cleaner way to manage your AWS environment, this guide will walk you through importing an existing resource—like an S3 bucket—into Terraform. You’ll get clear steps, real code examples, and a few bonus tips to avoid the usual headaches.</p>
<hr />
<h2 id="heading-why-terraform-import-matters">Why Terraform Import Matters</h2>
<p>Let’s say you’ve got an S3 bucket that’s been running for months—maybe it was created through the AWS console or a one-off script. Now you’re ready to manage it with Terraform so you can version control it, track changes, and avoid the manual config drift that happens over time.</p>
<p>That’s where <code>terraform import</code> comes in. It connects existing AWS resources to your Terraform state file without needing to tear anything down or rebuild from scratch. From that point on, you can manage the resource declaratively—like it was written in Terraform from day one.</p>
<hr />
<h2 id="heading-step-by-step-breakdown">Step-by-Step Breakdown</h2>
<p>Let’s walk through how to bring an existing AWS resource under Terraform’s control. We’ll use an S3 bucket as the example, but this same process works for pretty much any AWS resource—EC2, IAM, VPCs, etc.</p>
<h3 id="heading-1-understand-what-terraform-import-actually-does">1. Understand What <code>terraform import</code> Actually Does</h3>
<p>Before you jump in, it's important to get what <code>terraform import</code> really does—and what it doesn’t.</p>
<p><strong>✅ What it does:</strong><br />It connects a real AWS resource (like an existing S3 bucket) to your Terraform state file, so Terraform can start tracking it.</p>
<p><strong>❌ What it doesn’t do:</strong><br />It won’t magically create <code>.tf</code> files for you. You still need to write the resource block yourself—even if it’s empty to start.</p>
<p>Think of it like claiming a resource: Terraform starts managing it, but you still have to tell it <em>what</em> it's managing. No config block, no control.</p>
<h3 id="heading-2-write-the-resource-block-first">2. Write the Resource Block First</h3>
<p>Start by defining the resource in your Terraform configuration. It can be empty for now—just make sure to give it a name that’ll match the import. For an S3 bucket:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># main.tf</span>
resource <span class="hljs-string">"aws_s3_bucket"</span> <span class="hljs-string">"my_bucket"</span> {
  <span class="hljs-comment"># Leave this empty - You'll circle back to this later</span>
}
</code></pre>
<p>This block lets Terraform know you plan to manage a resource named <code>my_bucket</code>. The <code>aws_s3_bucket</code> part defines the type of resource (in this case, an S3 bucket), and <code>my_bucket</code> is the name you'll reference within your Terraform configuration.</p>
<h3 id="heading-3-run-terraform-init">3. Run <code>terraform init</code></h3>
<p>Before importing anything, make sure your Terraform environment is initialized. This sets up the AWS provider and configures your backend (like S3 or Terraform Cloud) if you're using remote state storage.</p>
<pre><code class="lang-bash">terraform init
</code></pre>
<p>If you haven’t configured your AWS provider yet, make sure to add this to a <code>provider.tf</code> file:</p>
<pre><code class="lang-bash">provider <span class="hljs-string">"aws"</span> {
  region = <span class="hljs-string">"us-east-1"</span>  <span class="hljs-comment"># Adjust to your working region</span>
}
</code></pre>
<p>After adding the provider block, run <code>terraform init</code> again. You should see output confirming that the AWS provider was successfully installed.</p>
<h3 id="heading-4-run-the-import-command">4. Run the Import Command</h3>
<p>Now for the fun part—importing the real AWS resource into your Terraform state. If you have an S3 bucket named <code>my-existing-bucket-name</code>, run the following command:</p>
<pre><code class="lang-bash">terraform import aws_s3_bucket.my_bucket my-existing-bucket-name
</code></pre>
<ul>
<li><p><code>aws_s3_bucket.my_bucket</code>: This matches the resource type and name defined in your <code>.tf</code> file.</p>
</li>
<li><p><code>my-existing-bucket-name</code>: This is the actual name of the S3 bucket in your AWS account that you want to bring under Terraform management.</p>
</li>
</ul>
<p>If the import is successful, Terraform will update your <code>terraform.tfstate</code> file with the bucket’s metadata. You’ll see a confirmation message like:</p>
<pre><code class="lang-bash">aws_s3_bucket.my_bucket: Import completed!
</code></pre>
<h3 id="heading-5-run-terraform-state-show">5. Run <code>terraform state show</code></h3>
<p>The import doesn’t populate your <code>.tf</code> file—it only updates the state. To see what Terraform found, run:</p>
<pre><code class="lang-bash">terraform state show aws_s3_bucket.my_bucket
</code></pre>
<p>This outputs the bucket’s properties as recorded in Terraform state:</p>
<pre><code class="lang-bash">id                  = <span class="hljs-string">"my-existing-bucket-name"</span>
bucket              = <span class="hljs-string">"my-existing-bucket-name"</span>
acl                 = <span class="hljs-string">"private"</span>
versioning {
  enabled = <span class="hljs-literal">false</span>
}
</code></pre>
<p>Use the output to map out your resource block. Now update <code>main.tf</code> to match it:</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_s3_bucket"</span> <span class="hljs-string">"my_bucket"</span> {
  bucket = <span class="hljs-string">"my-existing-bucket-name"</span>
  acl    = <span class="hljs-string">"private"</span>
}
</code></pre>
<p>Now, Terraform fully manages the bucket. To confirm, run <code>terraform plan</code>; if your configuration matches the real resource, the plan will report no changes.</p>
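<p>You can automate that drift check in scripts or CI. This sketch is illustrative (it assumes the <code>terraform</code> binary is on your PATH) and relies on <code>terraform plan -detailed-exitcode</code>, which exits 0 when state matches reality, 1 on error, and 2 when changes are pending:</p>

```python
import subprocess

def plan_status(returncode: int) -> str:
    # terraform plan -detailed-exitcode: 0 = no changes, 1 = error, 2 = changes pending
    return {0: "in sync", 1: "error", 2: "drift detected"}.get(returncode, "unknown")

def check_drift(workdir: str = ".") -> str:
    proc = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false"],
        cwd=workdir, capture_output=True,
    )
    return plan_status(proc.returncode)
```

<p>Wire <code>check_drift</code> into a scheduled job to catch manual console changes early.</p>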
<hr />
<h2 id="heading-common-pitfalls-to-avoid">⚠️ Common Pitfalls to Avoid</h2>
<p>Even experienced engineers run into these when using <code>terraform import</code>:</p>
<ul>
<li><p><strong>Skipping</strong> <code>terraform init</code><br />  Without initialization, the import command will fail with a provider error.</p>
</li>
<li><p><strong>Mismatched Resource Name</strong><br />  If Terraform doesn’t find a matching block like <code>aws_s3_bucket.my_bucket</code> in your code, it won’t know where to import the resource.</p>
</li>
<li><p><strong>Expecting Code Generation</strong><br />  <code>terraform import</code> updates the state, not your <code>.tf</code> files. Be sure to run <code>terraform state show</code> afterward and manually update your config.</p>
</li>
<li><p><strong>Incorrect Resource Identifiers</strong><br />  For complex resources like IAM roles or WAFs, make sure you use the exact ID or ARN format AWS expects—sometimes it’s just the name, other times it needs the full ARN.</p>
</li>
</ul>
<hr />
<h2 id="heading-bonus-tips">Bonus Tips</h2>
<ul>
<li><p><strong>Bulk Imports with Terraformer</strong><br />  Importing a single resource manually is fine—but if you're dealing with dozens, check out <a target="_blank" href="https://github.com/GoogleCloudPlatform/terraformer">Terraformer</a>. It auto-generates both <code>.tf</code> files and state from your existing AWS setup.</p>
</li>
<li><p><strong>Use Workspaces for Safe Testing</strong><br />  Create a separate <code>terraform workspace</code> when testing imports. It keeps your main state clean and reduces risk while you experiment.</p>
</li>
<li><p><strong>Script Your Imports</strong><br />  For repetitive imports like EC2 instances or security groups, use a simple Bash loop to automate it. Example:</p>
</li>
<li><pre><code class="lang-bash">  instances=(<span class="hljs-string">"i-1234567891"</span> <span class="hljs-string">"i-234567821"</span>)

  <span class="hljs-keyword">for</span> instance_id <span class="hljs-keyword">in</span> <span class="hljs-string">"<span class="hljs-variable">${instances[@]}</span>"</span>; <span class="hljs-keyword">do</span>
    terraform import <span class="hljs-string">"aws_instance.ec2_<span class="hljs-variable">${instance_id}</span>"</span> <span class="hljs-string">"<span class="hljs-variable">$instance_id</span>"</span>
  <span class="hljs-keyword">done</span>
</code></pre>
</li>
</ul>
<hr />
<h2 id="heading-ultimately">Ultimately</h2>
<p><code>terraform import</code> is one of the most underrated tools in the Terraform workflow. When combined with <code>terraform state show</code>, it gives you a safe path to bring existing AWS resources under Infrastructure as Code—without having to rebuild anything.</p>
<p>Now you’ve got the steps, the tools, and the confidence to start managing your cloud infrastructure the right way.</p>
<p>Hit a wall importing a resource? Drop it below and let’s troubleshoot it together.</p>
]]></content:encoded></item><item><title><![CDATA[5 AWS IAM Misconfigurations You Might Not Know You’re Making]]></title><description><![CDATA[AWS Identity and Access Management (IAM) is the foundation of your cloud security—yet it’s also one of the most misunderstood services in AWS. If you’ve ever opened an IAM policy and felt overwhelmed, you’re not alone. The good news? You don’t need t...]]></description><link>https://nextgentechpicks.com/5-aws-iam-misconfigurations-you-might-not-know-youre-making</link><guid isPermaLink="true">https://nextgentechpicks.com/5-aws-iam-misconfigurations-you-might-not-know-youre-making</guid><category><![CDATA[Cloud]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Security]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Developer]]></category><category><![CDATA[IAM]]></category><category><![CDATA[aws security]]></category><category><![CDATA[AWS IAM]]></category><dc:creator><![CDATA[Manny]]></dc:creator><pubDate>Wed, 23 Apr 2025 03:49:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745379867831/4b5600f2-2de3-48c5-a9fc-7cb2ff270493.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>AWS Identity and Access Management (IAM) is the foundation of your cloud security—yet it’s also one of the most misunderstood services in AWS.</strong> If you’ve ever opened an IAM policy and felt overwhelmed, you’re not alone. The good news? You don’t need to be a security expert to avoid common pitfalls. With just a bit of awareness, you can lock down your IAM setup and reduce risk significantly.</p>
<p>In this article, we’ll walk through five common IAM misconfigurations that could be hiding in your environment. For each one, you’ll get a clear breakdown of the problem, a real-world example, and actionable steps to fix it. Whether you’re a cloud engineer, DevOps practitioner, or AWS beginner, this guide will help you build a more secure and well-structured IAM foundation.</p>
<hr />
<h2 id="heading-1-overly-broad-permissions-using-too-often">1. Overly Broad Permissions (Using * too often)</h2>
<p><strong>Problem:</strong> It’s easy to slap <code>Action: "*"</code> or <code>Resource: "*"</code> into an IAM policy and call it a day. Who doesn’t love a shortcut? But this wildcard approach hands out way more access than needed, opening the door to accidental damage or malicious exploits.</p>
<p><strong>Example:</strong> Suppose you write a policy like this:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
  <span class="hljs-attr">"Action"</span>: <span class="hljs-string">"s3:*"</span>,
  <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"*"</span>
}
</code></pre>
<p>This lets the user do <em>anything</em> to <em>any</em> S3 bucket—upload, delete, or even wipe out your entire data lake. Imagine a junior dev with this policy accidentally running a script that deletes critical backups. Yikes.</p>
<p><strong>Fix:</strong> Scope it down. Specify exact actions and resources. For example, if someone only needs to read from a specific bucket, use:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
  <span class="hljs-attr">"Action"</span>: <span class="hljs-string">"s3:GetObject"</span>,
  <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:s3:::my-bucket/*"</span>
}
</code></pre>
<p>Test it first with the <code>iam:SimulatePrincipalPolicy</code> API or the IAM Policy Simulator to confirm it works as intended.</p>
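<p>To build intuition for why the wildcard is so broad, consider how IAM matches action patterns. This toy matcher is a simplification of the real IAM evaluation engine, shown for intuition only: it treats <code>*</code> as "match anything," just like the policy language does:</p>

```python
from fnmatch import fnmatchcase

def action_allowed(policy_action: str, request_action: str) -> bool:
    # IAM matches action patterns case-insensitively, with '*' wildcards
    return fnmatchcase(request_action.lower(), policy_action.lower())

# "s3:*" allows every S3 call, including destructive ones
assert action_allowed("s3:*", "s3:DeleteObject")
# A scoped policy only allows what it names
assert not action_allowed("s3:GetObject", "s3:DeleteObject")
```

<p>The scoped policy simply never matches the destructive call, which is exactly the point of least privilege.</p>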
<hr />
<h2 id="heading-2-inline-policies-everywhere">2. Inline Policies Everywhere</h2>
<p><strong>Problem:</strong> Inline policies—those one-off rules attached directly to a user, group, or role—are convenient until they’re not. They’re tough to audit, prone to duplicate logic, and a nightmare to update across your account.</p>
<p><strong>Example:</strong> You’ve got a developer with an inline policy granting EC2 access. Then another dev gets a slightly different inline policy. Soon, you’ve got a dozen custom policies doing similar things, and no one knows who has what.</p>
<p><strong>Fix:</strong> Switch to managed policies. Create a reusable policy like “EC2ReadOnly” and attach it to multiple users or roles. Updates are a breeze—just edit the managed policy once, and everyone’s covered.</p>
<p><strong>Example:</strong> A managed policy might look like:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
  <span class="hljs-attr">"Action"</span>: [<span class="hljs-string">"ec2:Describe*"</span>],
  <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"*"</span>
}
</code></pre>
<hr />
<h2 id="heading-3-not-using-conditions">3. Not Using Conditions</h2>
<p><strong>Problem:</strong> Policies without conditions are like doors without locks. They lack the fine-grained control to restrict access by location, time, or authentication method, leaving you vulnerable.</p>
<p><strong>Example:</strong> A policy allows <code>iam:UpdateUser</code> with no conditions. Someone with stolen credentials could reset passwords from anywhere, anytime—no MFA required.</p>
<p><strong>Fix:</strong> Add condition blocks. For instance, enforce MFA and limit access to your corporate IP range:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
  <span class="hljs-attr">"Action"</span>: <span class="hljs-string">"iam:UpdateUser"</span>,
  <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"*"</span>,
  <span class="hljs-attr">"Condition"</span>: {
    <span class="hljs-attr">"Bool"</span>: {<span class="hljs-attr">"aws:MultiFactorAuthPresent"</span>: <span class="hljs-string">"true"</span>},
    <span class="hljs-attr">"IpAddress"</span>: {<span class="hljs-attr">"aws:SourceIp"</span>: <span class="hljs-string">"203.0.113.0/24"</span>}
  }
}
</code></pre>
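<p>The effect of those conditions is easy to reason about in code. This sketch is an illustration of the logic, not the actual IAM engine: it mirrors the policy above, requiring both MFA and a source IP inside the corporate range:</p>

```python
import ipaddress

def condition_met(ctx: dict) -> bool:
    # Mirror of the policy's Condition block: MFA present AND source IP in 203.0.113.0/24
    mfa = ctx.get("aws:MultiFactorAuthPresent") == "true"
    source = ipaddress.ip_address(ctx.get("aws:SourceIp", "0.0.0.0"))
    in_range = source in ipaddress.ip_network("203.0.113.0/24")
    return mfa and in_range
```

<p>Stolen credentials used without MFA, or from outside the office range, now fail both checks.</p>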
<hr />
<h2 id="heading-4-trust-policies-without-proper-restriction">4. Trust Policies Without Proper Restriction</h2>
<p><strong>Problem:</strong> IAM roles rely on trust policies to define who can assume them via <code>sts:AssumeRole</code>. An overly permissive trust policy might let anyone—or anything—step into a powerful role.</p>
<p><strong>Example:</strong> A trust policy like this:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
  <span class="hljs-attr">"Principal"</span>: {<span class="hljs-attr">"AWS"</span>: <span class="hljs-string">"*"</span>},
  <span class="hljs-attr">"Action"</span>: <span class="hljs-string">"sts:AssumeRole"</span>
}
</code></pre>
<p>This lets <em>any AWS account</em> assume the role. A compromised key from another account could waltz right in.</p>
<p><strong>Fix:</strong> Lock it down. Specify exact principals or use conditions. For a Lambda role, try:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
  <span class="hljs-attr">"Principal"</span>: {<span class="hljs-attr">"Service"</span>: <span class="hljs-string">"lambda.amazonaws.com"</span>},
  <span class="hljs-attr">"Action"</span>: <span class="hljs-string">"sts:AssumeRole"</span>
}
</code></pre>
<p>For cross-account access, include the account ID and add MFA or OIDC checks.</p>
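<p>A quick audit helper can flag the open-principal pattern before it ships. This sketch is naive — real trust policies can also use principal lists, conditions, and other principal types it ignores — but it catches the worst case, a statement that allows <code>sts:AssumeRole</code> from a wildcard principal:</p>

```python
def has_open_trust(policy: dict) -> bool:
    """Return True if any Allow statement lets any AWS principal assume the role."""
    stmts = policy.get("Statement", [])
    if isinstance(stmts, dict):  # a single statement may appear as a bare object
        stmts = [stmts]
    for stmt in stmts:
        if stmt.get("Effect") != "Allow" or stmt.get("Action") != "sts:AssumeRole":
            continue
        principal = stmt.get("Principal", {})
        if principal == "*" or (isinstance(principal, dict) and principal.get("AWS") == "*"):
            return True
    return False
```

<p>Run it over role trust documents pulled from your account and review anything it flags.</p>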
<hr />
<h2 id="heading-5-forgotten-iam-users-and-roles">5. Forgotten IAM Users and Roles</h2>
<p><strong>Problem:</strong><br />Old users and roles with lingering access keys or broad permissions are silent threats. A former employee's credentials or an unused role could remain active for months—just waiting to be exploited.</p>
<p><strong>Example:</strong><br />An intern finishes their contract, but their IAM user still has an active access key attached to a permissive policy. Months later, that key is accidentally exposed online, putting your infrastructure at risk.</p>
<p><strong>Fix:</strong><br />Perform regular IAM audits using:</p>
<ul>
<li><p><strong>IAM Credential Report</strong> – View all IAM users and the status of their access keys.</p>
</li>
<li><p><strong>Access Analyzer</strong> – Identify unused permissions or unexpected external access.</p>
</li>
<li><p><strong>IAM Access Advisor</strong> – See which permissions are actually being used by roles and users.</p>
</li>
</ul>
<p>For extra protection, <strong>automate cleanup</strong> using a Lambda script that disables or rotates access keys older than 90 days.</p>
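<p>The 90-day cleanup logic itself is simple. In this sketch, the input shape is modeled loosely on boto3's <code>list_access_keys</code> response — treat the field names as an assumption — and it returns the key IDs old enough to disable or rotate:</p>

```python
from datetime import datetime, timedelta, timezone

def stale_keys(keys, max_age_days=90, now=None):
    """keys: iterable of dicts with 'AccessKeyId' and a timezone-aware 'CreateDate'."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [k["AccessKeyId"] for k in keys if k["CreateDate"] < cutoff]
```

<p>A scheduled Lambda can feed this function and then disable or rotate whatever it returns.</p>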
<hr />
<h2 id="heading-bonus-tips-for-stronger-iam-practices">🔐 Bonus Tips for Stronger IAM Practices</h2>
<ul>
<li><p><strong>Enable IAM Access Analyzer</strong><br />  Identify unintended access paths and external exposure before they become security issues.</p>
</li>
<li><p><strong>Enforce MFA for Sensitive Actions</strong><br />  Make multi-factor authentication mandatory for high-privilege users—it's one of the simplest ways to block unauthorized access.</p>
</li>
<li><p><strong>Use Service Control Policies (SCPs) in AWS Organizations</strong><br />  Apply guardrails across accounts to prevent risky actions, even if someone tries to bypass IAM.</p>
</li>
<li><p><strong>Automate Access Key Rotation</strong><br />  Use tools like AWS Secrets Manager, Lambda, or scheduled workflows to regularly rotate credentials and eliminate stale access.</p>
</li>
</ul>
<hr />
<h2 id="heading-call-to-action">Call to Action</h2>
<p><strong>IAM doesn’t have to be complicated—but it is absolutely essential.</strong> Now that you’ve seen the most common misconfigurations and how to fix them, you’re in a great position to audit your own environment and tighten things up.</p>
<p>If you found this helpful, bookmark it—and share it with a teammate who’s just getting started with AWS. You might save them from making the same mistakes.</p>
<p><strong>Cloud security starts with access control. The best time to improve it? Right now.</strong></p>
]]></content:encoded></item><item><title><![CDATA[Launch Your First AWS EC2 Instance]]></title><description><![CDATA[Introduction
Amazon EC2 (Elastic Compute Cloud) is a cloud service that provides resizable virtual servers (instances) for running applications. Developers and Operations teams use it to deploy and scale workloads with flexible cost and power.

EC2 K...]]></description><link>https://nextgentechpicks.com/launch-your-first-aws-ec2-instance</link><guid isPermaLink="true">https://nextgentechpicks.com/launch-your-first-aws-ec2-instance</guid><category><![CDATA[cdevops]]></category><category><![CDATA[AWS]]></category><category><![CDATA[ec2]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Beginner Developers]]></category><category><![CDATA[Tutorial]]></category><category><![CDATA[tutorials]]></category><dc:creator><![CDATA[Manny]]></dc:creator><pubDate>Sun, 20 Apr 2025 18:34:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745173939009/4c3ff574-c42d-40da-b7d9-bf9bf38a3e75.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Amazon EC2 (Elastic Compute Cloud) is a cloud service that provides resizable virtual servers (instances) for running applications. Developers and Operations teams use it to deploy and scale workloads with flexible cost and power.</p>
<hr />
<h2 id="heading-ec2-kick-starter-points">EC2 Kick Starter Points 🚀</h2>
<ul>
<li><p><strong>Instances</strong>: Virtual machines with customizable CPU, memory, storage, and OS options (Linux, Windows, macOS, Raspberry Pi OS).</p>
</li>
<li><p><strong>Scalability</strong>: Spin up or spin down instances on demand, or use auto-scaling based on traffic, CPU, memory, or custom metrics.</p>
</li>
<li><p><strong>Pricing</strong>: Flexible pricing options — Pay-as-you-go (On-Demand), Spot Instances, Reserved Instances, and Savings Plans.</p>
</li>
<li><p><strong>Use Cases</strong>: Hosting web apps, running databases, batch data processing, machine learning training, and more.</p>
</li>
<li><p><strong>Integrations</strong>: Manage instances using the AWS CLI, SDKs (Python, Java, Go, etc.), Terraform, and AWS CloudFormation.</p>
</li>
<li><p><strong>Networking</strong>: Instances launch within a Virtual Private Cloud (VPC) for isolated networking and better security.</p>
</li>
<li><p><strong>Storage</strong>: Attach persistent EBS volumes for long-term storage, or use ephemeral storage tied to the instance lifecycle.</p>
</li>
</ul>
<hr />
<h2 id="heading-bonus-tips-for-beginners">🎯 Bonus Tips for Beginners</h2>
<p><strong>SSH Key Permissions:</strong><br />If you're connecting from Linux or macOS, you must run <code>chmod 400 your-key.pem</code> before using SSH. This secures your private key and prevents SSH connection errors.</p>
<p><strong>Elastic IPs (EIP):</strong><br />If you stop and start your instance, your public IP address may change. To keep a static public IP, allocate and associate an Elastic IP.</p>
<p><strong>Instance Termination Reminder:</strong><br />Always terminate unused instances when you're done to avoid unexpected AWS charges.</p>
<hr />
<h2 id="heading-before-you-begin">🛠️ Before You Begin</h2>
<p>Make sure you have:</p>
<ul>
<li><p>An AWS account (Free Tier eligible works great!)</p>
</li>
<li><p>Access to a terminal or command line (Linux, macOS, or Windows)</p>
</li>
</ul>
<hr />
<h2 id="heading-how-to-launch-ec2-instance-from-the-aws-console">How to Launch EC2 Instance from the AWS Console</h2>
<ol>
<li><p><strong>Log into AWS Management Console</strong></p>
<ul>
<li>Go to <a target="_blank" href="https://aws.amazon.com/console/">https://aws.amazon.com/console/</a></li>
</ul>
</li>
<li><p><strong>Navigate to EC2 Service</strong></p>
<ul>
<li><p>In the "Find Services" bar, search for <strong>EC2</strong> and click it.</p>
</li>
<li><p><strong>Tip:</strong><br />  Check the AWS Region at the top right of the Console. Choose a region close to you for better performance and lower latency.</p>
</li>
</ul>
</li>
<li><p><strong>Click "Launch Instance"</strong></p>
<ul>
<li>Under the EC2 Dashboard, click the <strong>Launch Instance</strong> button.</li>
</ul>
</li>
<li><p><strong>Name your instance</strong></p>
<ul>
<li>Provide a simple name like: <code>my-first-instance</code> (this is optional but recommended).</li>
</ul>
</li>
<li><p><strong>Select an Amazon Machine Image (AMI)</strong></p>
<ul>
<li>Choose <strong>Amazon Linux 2 AMI (Free Tier eligible)</strong>.</li>
</ul>
</li>
<li><p><strong>Choose an Instance Type</strong></p>
<ul>
<li>Choose <strong>t2.micro</strong> (Free Tier eligible). If t2.micro is unavailable in your region, select <strong>t3.micro</strong> instead.</li>
</ul>
</li>
<li><p><strong>Create or Select a Key Pair</strong></p>
<ul>
<li><p>If you don't have one:</p>
<ul>
<li><p>Click <strong>Create new key pair</strong>.</p>
</li>
<li><p>Key pair type: <strong>RSA</strong>.</p>
</li>
<li><p>Private key format: <strong>.pem</strong> (for Linux/macOS) or <code>.ppk</code> (for Windows).</p>
</li>
<li><p>Download the <code>.pem</code> file and <strong>store it securely</strong>.</p>
</li>
</ul>
</li>
<li><p>If you already have one, just select it.</p>
</li>
<li><p><strong>Important:</strong><br />  Always create or use a Key Pair. If you proceed without a Key Pair, you won't be able to connect to your instance using SSH.</p>
</li>
</ul>
</li>
<li><p><strong>Configure Network Settings</strong></p>
<ul>
<li><p>Create a new Security Group.</p>
</li>
<li><p>Allow <strong>SSH (port 22)</strong> from <strong>your IP address</strong>.</p>
</li>
<li><p>(Optional) Allow <strong>HTTP (port 80)</strong> if you plan to serve a web app.</p>
</li>
</ul>
</li>
<li><p><strong>Configure Storage</strong></p>
<ul>
<li><p>Default 8 GB General Purpose SSD (gp2) is fine for testing.</p>
</li>
<li><p>You can increase if needed (but keep it Free Tier if testing).</p>
</li>
</ul>
</li>
<li><p><strong>Review and Launch</strong></p>
<ul>
<li><p>Review all settings.</p>
</li>
<li><p>Click <strong>Launch Instance</strong>.</p>
</li>
</ul>
</li>
<li><p><strong>View Instance</strong></p>
<ul>
<li>After a few seconds, click <strong>View Instances</strong> to see your new EC2 running!</li>
</ul>
</li>
<li><p><strong>Connect to Your Instance:</strong></p>
<ul>
<li><p>Select your running instance.</p>
</li>
<li><p>Click <strong>Connect</strong>.</p>
</li>
<li><p>Follow the on-screen SSH connection instructions to securely access your EC2 server.</p>
</li>
</ul>
</li>
</ol>
<hr />
<h2 id="heading-how-to-launch-an-ec2-instance-using-aws-cli">🚀 How to Launch an EC2 Instance Using AWS CLI</h2>
<p>Launching an EC2 instance from the AWS Command Line Interface (CLI) gives you faster automation and scripting abilities — perfect for DevOps workflows.</p>
<hr />
<h3 id="heading-before-you-begin-1">📋 Before You Begin</h3>
<p>Make sure you have the following ready:</p>
<ul>
<li><p><strong>AWS CLI installed</strong><br />  Check by running:</p>
<pre><code class="lang-bash">  aws --version
</code></pre>
</li>
<li><p><strong>AWS credentials configured</strong><br />  Set them up using:</p>
<pre><code class="lang-bash">  aws configure
</code></pre>
<p>  (You will need your Access Key ID, Secret Access Key, and a default AWS region.)</p>
</li>
<li><p><strong>A Key Pair created</strong></p>
<ul>
<li><p>You need a Key Pair (<code>.pem</code> file) already created in the region where you are launching the instance.</p>
</li>
<li><p>If you don't have one, you can create it via AWS Console or CLI.</p>
</li>
</ul>
</li>
<li><p><strong>A Security Group created</strong></p>
<ul>
<li>Security Group must allow inbound SSH traffic (port 22) at minimum.</li>
</ul>
</li>
<li><p><strong>An AMI ID</strong></p>
<ul>
<li><p>For example, the Amazon Linux 2 AMI ID for <code>us-east-1</code> was <code>ami-0c2b8ca1dad447f8a</code> at the time of writing. AMI IDs are region-specific and change with each release, so verify the current ID before launching.</p>
</li>
<li><p>You can find the latest AMI ID by running:</p>
<pre><code class="lang-bash">    aws ec2 describe-images --owners amazon --filters <span class="hljs-string">"Name=name,Values=amzn2-ami-hvm-*-x86_64-gp2"</span> --query <span class="hljs-string">'Images[*].[ImageId,Name]'</span> --output text
</code></pre>
</li>
</ul>
</li>
</ul>
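<p>If you script that lookup, picking the newest image from the query output takes one more step. This sketch assumes the Amazon Linux 2 naming convention, where AMI names embed a release datestamp (e.g. <code>amzn2-ami-hvm-2.0.20250401.0-x86_64-gp2</code>), so the lexically greatest name is the newest build — verify that convention before reusing this for other image families:</p>

```python
def latest_ami(images):
    """images: list of [ImageId, Name] pairs, as produced by the describe-images query."""
    if not images:
        raise ValueError("no images returned")
    # amzn2 names embed a sortable 2.0.YYYYMMDD datestamp, so max() by name = newest
    return max(images, key=lambda pair: pair[1])[0]
```
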
<h3 id="heading-launch-ec2-instance-command">🛠️ Launch EC2 Instance Command</h3>
<pre><code class="lang-bash">    aws ec2 run-instances \
      --image-id ami-0c2b8ca1dad447f8a \
      --count 1 \
      --instance-type t2.micro \
      --key-name YourKeyPairName \
      --security-groups YourSecurityGroupName
</code></pre>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Parameter</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td><code>--image-id</code></td><td>The AMI ID of the image you want to use (ex: Amazon Linux 2)</td></tr>
<tr>
<td><code>--count</code></td><td>Number of instances to launch (usually 1 for testing)</td></tr>
<tr>
<td><code>--instance-type</code></td><td>Instance size/type (t2.micro for Free Tier)</td></tr>
<tr>
<td><code>--key-name</code></td><td>The name of your Key Pair for SSH access</td></tr>
<tr>
<td><code>--security-groups</code></td><td>Name of the Security Group allowing access</td></tr>
</tbody>
</table>
</div><p><strong>⚡ How to Check if the Instance was Created</strong></p>
<p>After running the <code>run-instances</code> command, you can verify the instance is running:</p>
<pre><code class="lang-bash">    aws ec2 describe-instances --query <span class="hljs-string">'Reservations[*].Instances[*].[InstanceId,State.Name,PublicIpAddress]'</span> --output table
</code></pre>
<p>This will show:</p>
<ul>
<li><p>Instance ID</p>
</li>
<li><p>Instance State (pending/running)</p>
</li>
<li><p>Public IP Address</p>
</li>
</ul>
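<p>If you prefer to block until the instance is ready rather than re-running the check by hand, the CLI provides a built-in waiter. The instance ID below is a placeholder; substitute the ID returned by <code>run-instances</code>:</p>
<pre><code class="lang-bash">    <span class="hljs-comment"># Polls EC2 and returns once the instance reaches the "running" state</span>
    aws ec2 wait instance-running --instance-ids i-0123456789abcdef0
</code></pre>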
<p>Once your instance shows a "running" state and a public IP address, it's ready for connection. 🎉</p>
<p>You can now securely SSH into your instance using your Key Pair and start working with your first cloud server.</p>
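<p>As a sketch, assuming your key file is named <code>YourKeyPairName.pem</code> and you launched an Amazon Linux instance (whose default login user is <code>ec2-user</code>), the connection looks like this; replace the placeholder with your instance's public IP:</p>
<pre><code class="lang-bash">    <span class="hljs-comment"># SSH requires the private key to not be world-readable</span>
    chmod 400 YourKeyPairName.pem
    ssh -i YourKeyPairName.pem ec2-user@&lt;PUBLIC_IP&gt;
</code></pre>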
<blockquote>
<p><strong>Reminder:</strong><br />After testing, remember to terminate your instance to avoid unnecessary AWS charges!</p>
</blockquote>
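<p>Termination can also be done straight from the CLI; again, the instance ID is a placeholder for the one you launched:</p>
<pre><code class="lang-bash">    <span class="hljs-comment"># Permanently shuts down and deletes the instance (stops billing for it)</span>
    aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
</code></pre>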
]]></content:encoded></item><item><title><![CDATA[Launching My Tech Blog 🚀]]></title><description><![CDATA[Welcome to NextGenTechPicks!
I'm Manny — a DevOps Engineer and Cloud Enthusiast passionate about AWS, Terraform, backend development, and building scalable cloud infrastructure.
I started this blog to share:

Cloud engineering tutorials — practical g...]]></description><link>https://nextgentechpicks.com/launching-my-tech-blog</link><guid isPermaLink="true">https://nextgentechpicks.com/launching-my-tech-blog</guid><category><![CDATA[Announcement]]></category><category><![CDATA[new_blog]]></category><category><![CDATA[personal]]></category><category><![CDATA[first post]]></category><category><![CDATA[techblog]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Manny]]></dc:creator><pubDate>Sun, 20 Apr 2025 05:27:41 GMT</pubDate><content:encoded><![CDATA[<p>Welcome to <strong>NextGenTechPicks</strong>!</p>
<p>I'm Manny — a DevOps Engineer and Cloud Enthusiast passionate about AWS, Terraform, backend development, and building scalable cloud infrastructure.</p>
<p>I started this blog to share:</p>
<ul>
<li><p><strong>Cloud engineering tutorials</strong> — practical guides on AWS, Terraform, Kubernetes, and more.</p>
</li>
<li><p><strong>Backend development projects</strong> — Python APIs, automation scripts, and real-world builds.</p>
</li>
<li><p><strong>Tech product reviews and top picks</strong> — honest recommendations for tools that help developers and engineers level up, whether in the office, at home, or on the go.</p>
</li>
<li><p><strong>Lessons learned</strong> — real-world insights from my own projects and experiences.</p>
</li>
</ul>
<hr />
<h2 id="heading-why-i-created-this-blog">🎯 Why I Created This Blog</h2>
<p>I believe that learning, building, and sharing are the foundations of growing as a developer, cloud engineer, and creator.</p>
<p>As I continue to grow in my career and personal projects, I want to create a space to:</p>
<ul>
<li><p>Document my journey</p>
</li>
<li><p>Help others learn faster</p>
</li>
<li><p>Share tools and products that genuinely make a difference</p>
</li>
</ul>
<p>Whether you're starting your cloud journey, looking for backend tips, or searching for the best gear to boost your workflow — you're in the right place!</p>
<hr />
<h2 id="heading-what-to-expect">🔥 What to Expect</h2>
<p>I'll be posting:</p>
<ul>
<li><p>Step-by-step tutorials</p>
</li>
<li><p>Automation guides</p>
</li>
<li><p>Product reviews</p>
</li>
<li><p>Practical tips for developers, cloud engineers, and tech enthusiasts</p>
</li>
</ul>
<p>Stay tuned — the best is yet to come! 🚀</p>
<hr />
<p>Thanks for visiting,<br /><strong>- Manny</strong></p>
<hr />
<p>If you enjoyed this post, stay tuned for tutorials, product reviews, and real-world lessons coming soon!</p>
]]></content:encoded></item></channel></rss>