This tutorial is designed to guide you through building an AI-powered web application and showcase the potential of AI in everyday web development. Artificial Intelligence (AI) is revolutionizing modern web technology, making it more innovative and responsive. By incorporating AI, developers can enhance user experiences through features like real-time data analysis, personalized content recommendations, and advanced image recognition.

Next.js is a robust React framework that enables developers to quickly build server-side rendered and static web applications. It offers excellent performance, scalability, and a seamless developer experience. TensorFlow.js, on the other hand, is a JavaScript library that allows you to train and run machine learning models directly in the browser.

By combining Next.js and TensorFlow.js, you can create sophisticated web applications that leverage the power of AI without needing extensive backend infrastructure.

By the end of this tutorial, you will have built a fully functional AI-powered web application capable of performing image recognition tasks. You'll gain hands-on experience with Next.js and TensorFlow.js, learning how to integrate machine learning models into a modern web framework.

This tutorial will equip you with the skills to start incorporating AI features into your projects, opening up new possibilities for innovation and user engagement.

Setting Up the Environment

Prerequisites: Node.js and npm installed, plus basic familiarity with JavaScript and React.

Step 1: Setting Up the Project

It's common to place your projects in a Projects directory within your home directory.

Move into that folder by running:

cd Projects

Your projects would then be located in paths like:

/home/your-username/Projects/my_project (Linux)
/Users/your-username/Projects/my_project (Mac)

On Windows, you can use the Windows Subsystem for Linux (WSL) and follow the Linux paths above.

Step 2: Installing Next.js

If you haven't installed Next.js yet, you can create a new Next.js project using the following command:

Installing Next.js:

npx create-next-app ai-web-app

Test that the app is working so far:

npm run dev

You will see the Next.js app at http://localhost:3000. If it works, we can proceed.

Installing TensorFlow.js:

npm install @tensorflow/tfjs @tensorflow-models/mobilenet

Project Structure

ai-web-app/
├── node_modules/
├── public/
├── src/
│   ├── pages/
│   │   ├── api/
│   │   │   └── hello.js
│   │   ├── _app.js
│   │   ├── _document.js
│   │   ├── index.js
│   ├── styles/
│   │   ├── globals.css
│   │   ├── Home.module.css
│   ├── utils/
│   │   └── imageProcessing.js
├── .gitignore
├── package.json
├── README.md

First, add the src/utils/imageProcessing.js file shown in the structure above; we will fill it in later.

Then open src/pages/index.js, erase all the boilerplate code, and add the code described in the following parts:

Part 1: Imports and State Initialization

  1. Imports
  2. State Initialization
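The code blocks for this part are missing from the text, so here is a minimal sketch of what the top of src/pages/index.js could look like. It assumes the helpers live in src/utils/imageProcessing.js as shown in the project structure; the exact imports may differ from the original code.

```javascript
// src/pages/index.js — sketch of the imports and state initialization
import Head from "next/head";
import { useEffect, useState } from "react";
import styles from "../styles/Home.module.css";
import { loadModel, loadImage } from "../utils/imageProcessing";

export default function Home() {
  // model holds the loaded MobileNet instance once it is ready;
  // predictions holds the classification results shown in the output area
  const [model, setModel] = useState(null);
  const [predictions, setPredictions] = useState([]);
  // ... model loading, handleAnalyzeClick, and the JSX follow
}
```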

Part 2: Handling Image Analysis

  1. handleAnalyzeClick Function:

  2. Retrieving the Uploaded Image File:

    const fileInput = document.getElementById("image-upload");
    const imageFile = fileInput.files[0];

  3. Checking if an Image File is Uploaded:

    if (!imageFile) {
      alert("Please upload an image file.");
      return;
    }

  4. Loading the Image and Classifying It:

    try {
      const image = await loadImage(imageFile);
      const predictions = await model.classify(image);
      setPredictions(predictions);
    } catch (error) {
      console.error('Error analyzing the image:', error);
    }

    try { ... } catch (error) { ... }: The try-catch block handles any errors during the image loading and classification process.

  5. Loading the Image:

    const image = await loadImage(imageFile);

  6. Classifying the Image:

    const predictions = await model.classify(image);

  7. Setting Predictions State:

    setPredictions(predictions);

    setPredictions(predictions): This updates the predictions state with the new classification results. This triggers a re-render of the component, displaying the predictions to the user.

  8. Handling Errors:

    catch (error) {
      console.error('Error analyzing the image:', error);
    }

    catch (error) { ... }: This block catches any errors that occur during the try block. console.error('Error analyzing the image:', error);: If an error occurs, it logs the error message to the console for debugging purposes.
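The steps above can be assembled into a single flow. Here is a minimal sketch that restructures it for testability outside the browser: the DOM lookup, image loader, and model are passed in as parameters, and the parameter names (getFile, setPredictions, and so on) are illustrative, not taken from the original code.

```javascript
// Sketch of the analyze flow with injected dependencies (hypothetical names)
async function analyzeImage({ getFile, loadImage, model, setPredictions }) {
  const imageFile = getFile(); // stands in for reading the file input
  if (!imageFile) {
    // mirrors the alert("Please upload an image file.") guard
    return null;
  }
  try {
    const image = await loadImage(imageFile);
    const predictions = await model.classify(image);
    setPredictions(predictions);
    return predictions;
  } catch (error) {
    console.error("Error analyzing the image:", error);
    return null;
  }
}
```

In the real component, getFile is the document.getElementById lookup and setPredictions is the React state setter.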

Part 3: Loading the TensorFlow Model

  1. Model Loading:
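The code for this step is not shown in the text; a minimal sketch, assuming the model state and the loadModel helper from the earlier parts, is a useEffect that loads MobileNet once when the page mounts:

```javascript
// Inside the Home component: load the MobileNet model on first render
useEffect(() => {
  loadModel()
    .then((loadedModel) => setModel(loadedModel))
    .catch((error) => console.error("Failed to load the model:", error));
}, []); // empty dependency array: run this effect only once
```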

Basic Layout

To begin building our AI-powered web application with Next.js and TensorFlow.js, we'll set up a basic layout using Next.js components. This initial structure will be the foundation for our application's user interface.

Part 4: Rendering the UI

Rendering:

JSX Return Statement

1. Fragment Wrapper

return (
    <>
      ...
    </>

<> ... </>: This React Fragment allows multiple elements to be grouped without adding extra nodes to the DOM.

2. Container Div

<div className={styles.container}>
  ...
</div>

<div className={styles.container}> ... </div>: This div wraps the main content of the page and applies styling from the styles.container class.

3. Head Component

<Head>
  <title>AI-Powered Web App</title>
</Head>

4. Main Content

<main className={styles.main}>
  ...
</main>

<main className={styles.main}> ... </main>: This main element contains the primary content of the page and applies styling from the styles.main class.

5. Title and Description

<h1 className={styles.title}>AI-Powered Web Application</h1>
<p className={styles.description}>
  Using Next.js and TensorFlow.js to show some AI model.
</p>

6. Input Area

<div id="input-area">
  <input type="file" className={styles.input} id="image-upload" />
  <button className={styles.button} onClick={handleAnalyzeClick}>
    Analyze Image
  </button>
</div>

7. Output Area

<div id="output-area">
  {predictions.length > 0 && (
    <ul>
      {predictions.map((pred, index) => (
        <li key={index}>
          {pred.className}: {(pred.probability * 100).toFixed(2)}%
        </li>
      ))}
    </ul>
  )}
</div>
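The map above turns each prediction into a readable line. The same formatting can be illustrated in plain JavaScript (formatPredictions is a hypothetical helper, not part of the app):

```javascript
// Formats MobileNet-style predictions the same way the JSX above renders them
function formatPredictions(predictions) {
  return predictions.map(
    (pred) => `${pred.className}: ${(pred.probability * 100).toFixed(2)}%`
  );
}
```

For example, a probability of 0.8765 renders as "87.65%".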

Edit the styles for the index.js page in Home.module.css: erase all the existing code and add the following:

.container {
  min-height: 100vh;
  padding: 0 0.5rem;
  display: flex;
  flex-direction: column;
  justify-content: center;
  align-items: center;
}
.main {
  padding: 5rem 0;
  flex: 1;
  display: flex;
  flex-direction: column;
  justify-content: center;
  align-items: center;
}
.title {
  margin: 0;
  line-height: 1.15;
  font-size: 4rem;
  text-align: center;
}
.description {
  margin: 4rem 0;
  line-height: 1.5;
  font-size: 1.5rem;
  text-align: center;
}

#output-area {
  margin-top: 2rem;
}
.li {
  margin-top: 10px;
  font-size: 20px;
}
.button {
  margin-top: 1rem;
  padding: 0.5rem 1rem;
  font-size: 1rem;
  cursor:pointer;
  background-color: #0070f3;
  color: white;
  border: none;
  border-radius: 5px;
}

.button:hover {
  background-color: #005bb5;
}

Once you have completed the previous steps, your page should look something like this:

Now, let's work on the brain of the app: the imageProcessing.js file.

Part 1: Loading the Model

Function: loadModel

import * as tf from "@tensorflow/tfjs";
import * as mobilenet from "@tensorflow-models/mobilenet";

export async function loadModel() {
  try {
    const model = await mobilenet.load();
    return model;
  } catch (error) {
    console.error("Error loading the model:", error);
    throw error;
  }
}

This function loads the MobileNet model using TensorFlow.js. Here's a step-by-step explanation:
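One detail worth noting: the catch block logs the error and then rethrows it, so callers can react to the failure as well. That behavior can be sketched with an injected loader (loadModelWith is an illustrative name; `load` stands in for mobilenet.load):

```javascript
// Mirrors loadModel's error handling with an injected loader function
async function loadModelWith(load) {
  try {
    return await load();
  } catch (error) {
    console.error("Error loading the model:", error);
    throw error; // rethrow so the caller also sees the failure
  }
}
```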

Part 2: Preprocessing the Image

Function: preprocessImage

export function preprocessImage(image) {
  const tensor = tf.browser
    .fromPixels(image)
    .resizeNearestNeighbor([224, 224]) // MobileNet input size
    .toFloat()
    .expandDims();
  return tensor.div(127.5).sub(1); // Normalize to [-1, 1] range
}

This function preprocesses an image into the format required by MobileNet. Here's a step-by-step explanation:
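The normalization in the last line can be checked with plain arithmetic: dividing a pixel value by 127.5 and subtracting 1 maps the 0–255 range onto [-1, 1], which is the range MobileNet expects. A scalar sketch (normalizePixel is an illustrative helper, not part of the app):

```javascript
// Same math as tensor.div(127.5).sub(1), applied to a single pixel value
function normalizePixel(value) {
  return value / 127.5 - 1;
}
```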

Part 3: Loading the Image

Function: loadImage

export function loadImage(file) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = (event) => {
      const img = new Image();
      img.src = event.target.result;
      img.onload = () => resolve(img);
    };
    reader.onerror = (error) => reject(error);
    reader.readAsDataURL(file);
  });
}

This function loads an image file and returns an HTML Image element. Here's a step-by-step explanation:
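The key idea is wrapping FileReader's callback API in a Promise so the result can be awaited. The same pattern, reduced to a generic sketch with an injected reader object (readAsPromise and the reader shape are illustrative, not part of the app):

```javascript
// Generic version of the loadImage pattern: success and error callbacks
// are converted into a Promise's resolve and reject
function readAsPromise(reader, input) {
  return new Promise((resolve, reject) => {
    reader.onload = (event) => resolve(event.target.result);
    reader.onerror = (error) => reject(error);
    reader.read(input);
  });
}
```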

Now you can test the final project by uploading images on the project's page and seeing the results. If you run into any problems, use the link below to clone the project from GitHub:

GitHub repository

Conclusion

This tutorial taught you how to build an AI-powered web application using Next.js and TensorFlow.js. We covered:

  1. Setting Up the Environment: You installed Next.js and TensorFlow.js and set up your development environment.
  2. Creating the User Interface: You made a simple UI for uploading images and displaying predictions.
  3. Integrating TensorFlow.js: You integrated the MobileNet model to perform image classification directly in the browser.

By combining Next.js and TensorFlow.js, you can create sophisticated web applications that leverage the power of AI, enhancing user experiences with features like image recognition.

Next Steps

To further improve your application, consider exploring these additional features:

Additional Resources

About the Author

Ivan Duarte is a backend developer with freelance experience. He is passionate about web development and artificial intelligence and enjoys sharing his knowledge through tutorials and articles. Follow him on X, GitHub, and LinkedIn for more insights and updates.