
Web Scraping for E-Commerce: What I Wish I Knew

Why E-Commerce Web Scraping Matters

Let's face it: running an e-commerce business is competitive. To stay ahead, you need to understand your market inside and out. That's where web scraping comes in. Imagine having a tool that could automatically gather information about your competitors' prices, product details, and even their inventory levels. That's the power of e-commerce scraping.

Web scraping, at its core, is a technique for extracting data from websites. Think of it as a digital copy-and-paste that's done automatically, at scale. It's like having an army of virtual assistants constantly monitoring the internet for information that matters to you. This isn't just about saving time; it's about gaining a serious competitive advantage.

Here are some specific ways e-commerce web scraping can boost your business:

  • Price Monitoring: Track competitor pricing in real-time. Adjust your prices dynamically to stay competitive and maximize profit margins. No more manually checking websites every day!
  • Product Monitoring: See what new products competitors are launching, what features they're highlighting, and how they're positioning their offerings. This gives you valuable insights for your own product development and marketing strategies.
  • Inventory Management: Monitor competitor stock levels to identify potential shortages or overstocking situations. This can inform your own inventory decisions and help you avoid missed sales or unnecessary holding costs.
  • Market Trend Identification: Scrape data from multiple sources to identify emerging trends, popular products, and changing customer preferences. This helps you stay ahead of the curve and capitalize on new opportunities. Think about using data scraping services to augment your knowledge of market trends.
  • Competitive Intelligence: Gather comprehensive data about your competitors' strategies, including their pricing, product offerings, marketing campaigns, and customer reviews. This gives you a holistic view of the competitive landscape.
  • Lead Generation and LinkedIn Scraping: Discover potential partners and customers by scraping data from LinkedIn and other relevant websites. Build targeted lists of contacts for your sales and marketing efforts.
  • Catalog Clean-up: Maintaining a product catalog with accurate information and images can be a chore. Web scraping can help you keep product details up-to-date, verify information, and identify discrepancies.
  • Deal Alerts: Stay up-to-date on the latest deals and promotions offered by your competitors. This allows you to react quickly and offer competitive deals to attract customers.

Essentially, e-commerce web scraping empowers you with ecommerce insights that would otherwise be difficult or impossible to obtain manually. It's a key component of any modern e-commerce business intelligence strategy. And, with automated data extraction, you can streamline your workflows and focus on strategic decision-making.
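The price monitoring idea above boils down to a simple decision rule: undercut the cheapest competitor slightly, but never drop below your own floor. Here's a minimal sketch of that logic; the prices and the `suggest_price` helper are hypothetical, and in a real pipeline the competitor prices would come from your scraper's output.

```python
# A minimal dynamic-repricing check. All prices here are made up;
# in practice competitor_prices would be filled in by a scraper.

def suggest_price(our_price, competitor_prices, floor, undercut=0.01):
    """Undercut the cheapest competitor slightly, but never go below our floor."""
    if not competitor_prices:
        return our_price  # no data: leave the price unchanged
    target = min(competitor_prices) - undercut
    return round(max(target, floor), 2)

scraped = [24.99, 26.50, 23.75]  # hypothetical competitor prices
print(suggest_price(our_price=25.99, competitor_prices=scraped, floor=20.00))
# → 23.74
```

The floor parameter matters: without it, a competitor's pricing error (or a scraping glitch) could drag your prices into loss-making territory.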

Understanding the Basics of Web Scraping

So, how does web scraping actually work? It's simpler than you might think. Here's a breakdown:

  1. The Scraper: This is the software or tool that does the work. It could be a simple script you write yourself, a dedicated web scraping library in a programming language like Python, or a third-party data scraping service.
  2. The Target Website: This is the website you want to extract data from.
  3. The Process: The scraper sends a request to the target website, just like your web browser does when you visit a page. The website responds with the HTML code of the page.
  4. Parsing the HTML: The scraper then parses the HTML code, looking for specific elements that contain the data you want to extract. This is often done using techniques like XPath or CSS selectors.
  5. Extracting the Data: Once the desired elements are identified, the scraper extracts the data from them.
  6. Storing the Data: Finally, the scraper stores the extracted data in a structured format, such as a CSV file, a database, or a spreadsheet.

Think of it like this: you (the scraper) ask a website (the target website) for its recipe (the HTML code). You then read the recipe carefully (parsing the HTML) and find the ingredients you need (extracting the data). Finally, you write down the list of ingredients (storing the data). That, in a nutshell, is web scraping.
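Steps 4 and 5 (parsing and extraction) are where most of the work happens. Here's a minimal sketch using Beautiful Soup's CSS selectors on a made-up HTML snippet; the markup and class names are hypothetical, and real pages will differ:

```python
# Extracting product names and prices from a hypothetical HTML snippet
# with CSS selectors (requires: pip install beautifulsoup4).
from bs4 import BeautifulSoup

html = """
<ul id="products">
  <li class="product"><span class="name">Widget</span> <span class="price">$9.99</span></li>
  <li class="product"><span class="name">Gadget</span> <span class="price">$19.99</span></li>
</ul>
"""

soup = BeautifulSoup(html, "html.parser")
products = [
    (item.select_one(".name").get_text(), item.select_one(".price").get_text())
    for item in soup.select("li.product")
]
print(products)
# → [('Widget', '$9.99'), ('Gadget', '$19.99')]
```

On a live site, you would find the right selectors by inspecting the page's HTML in your browser's developer tools.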

There are different approaches to web scraping, each with its own pros and cons:

  • Basic HTTP Requests: This involves sending simple HTTP requests to the target website and parsing the HTML manually. This is a good option for simple scraping tasks, but it can be tedious and error-prone for more complex websites.
  • HTML Parsing Libraries: Libraries like Beautiful Soup in Python make it easier to parse HTML code and extract data. This is a popular option for intermediate scraping tasks.
  • Web Scraping Frameworks: Frameworks like Scrapy in Python provide a more structured and scalable approach to web scraping. This is a good option for large-scale scraping projects.
  • Headless Browsers: Headless browsers like Puppeteer and Selenium allow you to simulate a real web browser, including executing JavaScript code. This is essential for scraping websites that rely heavily on JavaScript to render their content. Data scraping services often rely on headless browsers.
  • APIs: Some websites offer APIs (Application Programming Interfaces) that allow you to access their data in a structured format. This is the preferred method for data extraction, as it's more reliable and less prone to errors than scraping HTML. However, not all websites offer APIs.
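To illustrate the API option from the list above: instead of parsing HTML, you receive structured JSON and simply pick out the fields you need. The endpoint shape and payload below are hypothetical; with a real API you would fetch the data with `requests.get(url).json()`, but here the payload is inlined so the example is self-contained.

```python
# Parsing a (hypothetical) product API response with the standard library.
import json

payload = """
{
  "products": [
    {"sku": "W-1", "name": "Widget", "price": 9.99, "in_stock": true},
    {"sku": "G-2", "name": "Gadget", "price": 19.99, "in_stock": false}
  ]
}
"""

data = json.loads(payload)
available = [p["name"] for p in data["products"] if p["in_stock"]]
print(available)
# → ['Widget']
```

This is why APIs are the preferred route when available: the fields are named and typed, so there's no fragile HTML parsing to break when the site's layout changes.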

A Simple Web Scraping Example with Python and Pandas

Let's walk through a simple web scraping example using Python with the requests, Beautiful Soup, and pandas libraries. This example will scrape the title of a webpage.

Prerequisites:

  • Python 3 installed
  • Libraries: `requests`, `beautifulsoup4`, and `pandas`. You can install these using pip: `pip install requests beautifulsoup4 pandas`

Here's the Python code:


import requests
from bs4 import BeautifulSoup
import pandas as pd

# URL of the webpage you want to scrape
url = "https://www.justmetrically.com"

# Send an HTTP request to the URL
response = requests.get(url)

# Check if the request was successful
if response.status_code == 200:
    # Parse the HTML content using BeautifulSoup
    soup = BeautifulSoup(response.content, "html.parser")

    # Extract the title of the webpage
    title = soup.title.text

    # Create a Pandas DataFrame to store the data
    data = {'Title': [title]}
    df = pd.DataFrame(data)

    # Print the DataFrame
    print(df)

    # You can also save the data to a CSV file
    # df.to_csv("webpage_data.csv", index=False)

else:
    print(f"Error: Could not retrieve webpage. Status code: {response.status_code}")

Explanation:

  1. Import Libraries: The code imports the necessary libraries: `requests` for sending HTTP requests, `BeautifulSoup` for parsing HTML, and `pandas` for data manipulation and storage.
  2. Define URL: The `url` variable stores the URL of the webpage you want to scrape.
  3. Send HTTP Request: The `requests.get(url)` function sends an HTTP request to the specified URL. The response object contains the server's response to the request.
  4. Check Status Code: The `response.status_code` attribute contains the HTTP status code of the response. A status code of 200 indicates that the request was successful.
  5. Parse HTML: If the request was successful, the code parses the HTML content of the response using BeautifulSoup. The `BeautifulSoup(response.content, "html.parser")` function creates a BeautifulSoup object from the HTML content.
  6. Extract Title: The `soup.title.text` attribute extracts the text content of the `<title>` tag in the HTML document.
  7. Create Pandas DataFrame: The code creates a Pandas DataFrame to store the extracted data. The `data` dictionary contains the data, with the key 'Title' mapping to a list containing the extracted title.
  8. Print DataFrame: The `print(df)` function prints the DataFrame to the console.
  9. Save to CSV (Optional): The `df.to_csv("webpage_data.csv", index=False)` call saves the DataFrame to a CSV file named "webpage_data.csv". The `index=False` argument prevents Pandas from writing the DataFrame index to the CSV file.
  10. Error Handling: If the HTTP request fails (the status code is not 200), the code prints an error message to the console.

This is a very basic example, but it demonstrates the fundamental principles of web scraping. You can adapt this code to extract other data from web pages by modifying the HTML parsing logic. Remember to inspect the HTML structure of the target webpage to identify the elements that contain the data you want to extract.

This simple example can be built upon for more advanced scraping tasks like price monitoring or product monitoring.

Legal and Ethical Considerations

Before you start scraping, it's crucial to understand the legal and ethical implications. Not all websites allow web scraping, and some have specific rules you need to follow.

Here are some key considerations:

  • Robots.txt: This file, located at the root of a website (e.g., https://www.example.com/robots.txt), specifies which parts of the website should not be accessed by web robots (including scrapers). Always check this file before scraping to see if there are any restrictions. Disregarding robots.txt is generally considered unethical and could lead to legal issues.
  • Terms of Service (ToS): Most websites have a Terms of Service agreement that outlines the rules for using their website, which may include restrictions on web scraping. Review the ToS carefully to ensure that your scraping activities are permitted.
  • Rate Limiting: Avoid overwhelming the target website with requests. Implement rate limiting in your scraper to send requests at a reasonable pace. This helps prevent the website from being overloaded and potentially crashing.
  • Data Usage: Be mindful of how you use the scraped data. Don't use it for illegal or unethical purposes, such as spreading misinformation or violating privacy laws.
  • Respect Copyright: Don't reproduce or distribute copyrighted material you scrape without permission.

In general, it's best to err on the side of caution and respect the website's wishes. If you're unsure whether scraping is permitted, consider contacting the website owner directly to ask for permission. Remember, even if you scrape data without coding, these considerations still apply.

Consider also the impact of news scraping. While generally permissible, ensure you attribute sources properly and don't violate copyright laws.

Getting Started: A Quick Checklist

Ready to dive into e-commerce web scraping? Here's a quick checklist to get you started:

  • Define Your Goals: What specific data do you need to extract? What business problems are you trying to solve?
  • Choose Your Tools: Select the right tools for the job, whether it's a simple Python script, a web scraping framework, or a data scraping service.
  • Identify Target Websites: Choose the websites that contain the data you need.
  • Inspect the HTML: Examine the HTML structure of the target websites to identify the elements that contain the data you want to extract.
  • Write Your Scraper: Write the code to extract the data from the target websites.
  • Test and Refine: Test your scraper thoroughly to ensure that it's working correctly. Refine your code as needed to improve accuracy and efficiency.
  • Implement Rate Limiting: Implement rate limiting to avoid overwhelming the target website.
  • Store the Data: Store the extracted data in a structured format, such as a CSV file, a database, or a spreadsheet.
  • Monitor Your Scraper: Monitor your scraper regularly to ensure that it's still working correctly. Websites can change their HTML structure, which can break your scraper.
  • Stay Legal and Ethical: Always respect the website's robots.txt file and Terms of Service.

Web scraping can provide valuable data reports to inform your business intelligence strategy.

Alternatives to Coding: No-Code Scraping Solutions

If you're not comfortable with coding, don't worry! There are several no-code web scraping tools that make it easy to extract data from websites without writing any code. These tools typically offer a visual interface where you can point and click to select the data you want to scrape.

Some popular no-code web scraping tools include:

  • Octoparse: A cloud-based web scraping platform with a visual interface and advanced features like automatic data detection and scheduled scraping.
  • ParseHub: Another popular no-code web scraping tool that can extract data from dynamic websites that rely on JavaScript.
  • WebHarvy: A desktop-based web scraping tool with a user-friendly interface and support for various data formats.
  • Apify: A cloud-based platform that offers a wide range of pre-built web scraping actors and the ability to build your own custom scrapers.

These tools often provide features like:

  • Visual Interface: A user-friendly interface for selecting the data you want to scrape.
  • Automatic Data Detection: The ability to automatically detect data fields on a webpage.
  • Scheduled Scraping: The ability to schedule scraping tasks to run automatically on a regular basis.
  • Data Export: The ability to export the scraped data in various formats, such as CSV, Excel, or JSON.
  • Cloud-Based: Many of these tools run in the cloud, so you don't need to install any software on your computer.

Using a no-code tool can be a great way to scrape data without coding and get started with e-commerce data extraction quickly and easily.

Beyond Price and Product: Expanding Your Scraping Horizons

While price monitoring and product monitoring are common uses of e-commerce web scraping, the possibilities extend far beyond these applications. Consider these additional use cases:

  • Customer Review Analysis: Scrape customer reviews from e-commerce websites to understand customer sentiment and identify areas for improvement in your products or services.
  • Social Media Monitoring: Scrape social media platforms to track mentions of your brand, your competitors, or your industry. This can provide valuable insights into customer opinions and market trends.
  • Supply Chain Monitoring: Scrape supplier websites to track pricing, availability, and lead times for raw materials and components. This can help you optimize your supply chain and reduce costs.
  • Job Board Scraping: Scrape job boards to identify potential candidates for open positions in your company.
  • Real Estate Scraping: Scrape real estate websites to track property prices, availability, and other relevant data.
  • Sentiment Analysis: Integrate scraping with sentiment analysis tools to automatically analyze the emotional tone of text data, such as customer reviews or social media posts.

The key is to think creatively about how web scraping can help you gather the information you need to make better business decisions. With the right tools and techniques, you can unlock a wealth of valuable data that can give you a significant competitive advantage.

Whether you're interested in screen scraping for product details or more complex automated data extraction for inventory management, the right approach is key.
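Two of the legal and ethical points covered earlier, checking robots.txt and rate limiting, can be sketched with the Python standard library alone. The robots.txt rules and URLs below are hypothetical:

```python
# Checking robots.txt rules and pacing requests with the standard library.
# The rules and URLs here are made up for illustration.
import time
from urllib.robotparser import RobotFileParser

rules = """
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())  # with a live site: rp.set_url(...); rp.read()

urls = [
    "https://www.example.com/products",
    "https://www.example.com/private/admin",
]

for url in urls:
    if not rp.can_fetch("*", url):
        print(f"Skipping (disallowed by robots.txt): {url}")
        continue
    print(f"Would fetch: {url}")
    time.sleep(1)  # rate limiting: pause between requests
```

A fixed one-second pause is the simplest form of politeness; production scrapers often add jitter or adapt the delay to the server's response times.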
Consider how LinkedIn scraping might improve your sales team's reach.

In today's digital landscape, e-commerce scraping provides essential insights into market trends and strengthens your overall competitive intelligence.

Ready to take your e-commerce business to the next level? Sign up at https://www.justmetrically.com/login?view=sign-up and start leveraging the power of web scraping today!

Contact us: info@justmetrically.com

#WebScraping #ECommerce #DataExtraction #PriceMonitoring #CompetitiveIntelligence #Python #DataAnalysis #MarketTrends #BusinessIntelligence #DataScrapingServices

Related posts

  • Web Scraping for Ecommerce Isn't Scary
  • E-Commerce Scraping That Actually Works (guide)
  • Web scraping for e-commerce what I learned (2025)
  • E-commerce Scraping Without the Headache
  • Ecommerce Scraping: My Simple Setup