Creating a custom ChatGPT-like application can be an exciting project for developers and AI enthusiasts alike. This guide walks you through building an AI-powered chatbot with DeepSeek R1 on Ubuntu 24, with a React frontend and deployment on Vercel. We will use a Node.js backend, though Python is also an option if you prefer.
Before you begin, ensure your system meets the following requirements:

- Ubuntu 24 with terminal access and sudo privileges
- A working internet connection to download the model and packages
- Node.js and npm (installed below)
- Optionally, a compatible NVIDIA GPU for faster inference (see the FAQ at the end)
Open a terminal and update your package list:

```bash
sudo apt update && sudo apt upgrade -y
```

Ollama is a tool that simplifies running AI models locally. To install it, run:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

To download and run the DeepSeek R1 model, execute the following command:

```bash
ollama run deepseek-r1:7b
```

To list all installed models, use:

```bash
ollama list
```

Next, install Node.js and npm, initialize a project, and install the server dependencies:

```bash
sudo apt install nodejs npm -y
npm init -y
npm install express body-parser cors axios dotenv
```

Create an index.js file for your Express server:

```javascript
const express = require('express');
const bodyParser = require('body-parser');
const cors = require('cors');

const app = express();
app.use(cors());
app.use(bodyParser.json());

app.post('/api/chat', async (req, res) => {
  const userMessage = req.body.message;
  // Logic to interact with DeepSeek R1 goes here.
  // Example placeholder response:
  res.json({ response: `You said: ${userMessage}` });
});

const PORT = process.env.PORT || 5000;
app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});
```

Start the server:

```bash
node index.js
```
To connect your Node.js server with DeepSeek R1, implement the logic in the /api/chat endpoint. This typically involves forwarding the user's message to the model and returning its reply. Here's an example of how you might implement this using Axios to call Ollama's HTTP API:
```javascript
const axios = require('axios');

app.post('/api/chat', async (req, res) => {
  const userMessage = req.body.message;
  try {
    // Ollama's REST API listens on port 11434 by default.
    const response = await axios.post('http://localhost:11434/api/generate', {
      model: 'deepseek-r1:7b',
      prompt: userMessage,
      stream: false,
    });
    res.json({ response: response.data.response });
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: 'Failed to communicate with DeepSeek R1' });
  }
});
```
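With stream set to false, Ollama returns a single JSON object whose response field holds the model's text; DeepSeek R1 additionally wraps its chain of thought in <think> tags, which you usually don't want to show in a chat UI. Here is a small helper to clean the reply before sending it to the client (a sketch; extractReply is not part of any library):

```javascript
// Extract the model's reply from an Ollama /api/generate JSON body.
// With stream: false, Ollama returns an object like:
//   { model: "deepseek-r1:7b", response: "...", done: true, ... }
// DeepSeek R1 emits its reasoning inside <think>...</think> tags,
// which this helper strips out before returning the visible answer.
function extractReply(body) {
  if (!body || typeof body.response !== 'string') {
    throw new Error('Unexpected Ollama response shape');
  }
  return body.response.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
}

console.log(extractReply({
  model: 'deepseek-r1:7b',
  response: '<think>User greeted me.</think>\nHello! How can I help?',
  done: true,
}));
// → Hello! How can I help?
```

In the route above you would then respond with `res.json({ response: extractReply(response.data) })`.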
You can test your setup using tools like Postman or curl. Send a POST request to http://localhost:5000/api/chat with a JSON body containing a message:

```json
{
  "message": "Hello, how are you?"
}
```
You should receive a response from your Node.js server, which in turn communicates with DeepSeek R1.
Next, scaffold the React frontend and install Axios:

```bash
npx create-react-app frontend && cd frontend
npm install axios
```

Replace the contents of src/App.js with the following:

```javascript
import React, { useState } from 'react';
import axios from 'axios';

function App() {
  const [input, setInput] = useState('');
  const [messages, setMessages] = useState([]);

  const sendMessage = async () => {
    const response = await axios.post('http://localhost:5000/api/chat', { message: input });
    // The backend responds with { response: "..." }, so read the response field.
    setMessages([
      ...messages,
      { text: input, sender: 'user' },
      { text: response.data.response, sender: 'bot' },
    ]);
    setInput('');
  };

  return (
    <div>
      <h1>Chat with DeepSeek R1</h1>
      <div>
        {messages.map((msg, index) => (
          <div key={index} className={msg.sender}>
            {msg.text}
          </div>
        ))}
      </div>
      <input value={input} onChange={(e) => setInput(e.target.value)} />
      <button onClick={sendMessage}>Send</button>
    </div>
  );
}

export default App;
```

Start the development server:

```bash
npm start
```

To deploy the frontend, install the Vercel CLI and run it from the frontend directory:

```bash
npm install -g vercel
vercel
```

To configure your backend server's domain and IP so that your Vercel-deployed React application can communicate with it, follow these steps:
1. In your DNS settings, create an A record that points api.yourdomain.com to <Your_Server_IP_Address>.
2. In your React code, replace localhost with the new domain you've configured (e.g., https://api.yourdomain.com). This ensures that API calls from your frontend are directed to the correct server.

Following these steps, you will successfully configure your backend server to communicate with your Vercel-deployed application using a custom domain.
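Rather than hard-coding the URL in App.js, you can resolve it from the environment. Create React App exposes variables prefixed with REACT_APP_ at build time; REACT_APP_API_URL below is a hypothetical name you would set in Vercel's project settings:

```javascript
// Pick the API base URL from the environment, falling back to localhost
// for local development. REACT_APP_API_URL is a placeholder variable name.
function apiBaseUrl(env) {
  return env.REACT_APP_API_URL || 'http://localhost:5000';
}

console.log(apiBaseUrl({})); // http://localhost:5000
console.log(apiBaseUrl({ REACT_APP_API_URL: 'https://api.yourdomain.com' })); // https://api.yourdomain.com
```

In App.js you would then call `axios.post(`${apiBaseUrl(process.env)}/api/chat`, ...)`.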
You have successfully built and deployed your own ChatGPT-like application using DeepSeek R1 on Ubuntu 24 with a React frontend and Node.js backend! This guide covered each step involved in setting up your environment, building your application, and deploying it online. Explore further enhancements such as user authentication or more advanced features to improve your chatbot's functionality.
To enable communication between your Vercel app and your backend server, configure your backend to accept requests from the domain where your Vercel app is hosted. Update your API calls in the React code to use the public domain or IP address of your backend instead of localhost.
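One way to restrict the backend to known origins is the origin option of the cors middleware. The sketch below shows the matching logic as a plain function so it can run standalone; the domains are placeholders you would replace with your real Vercel URL and API domain:

```javascript
// Origin-checking logic in the style of the cors middleware's `origin`
// option. The entries below are placeholders, not real deployments.
const allowedOrigins = [
  'http://localhost:3000',
  'https://your-app.vercel.app',
];

function isAllowedOrigin(origin) {
  // Requests without an Origin header (e.g. curl) are allowed here.
  if (!origin) return true;
  return allowedOrigins.includes(origin);
}

console.log(isAllowedOrigin('https://your-app.vercel.app')); // true
console.log(isAllowedOrigin('https://evil.example.com')); // false
```

With the cors package this becomes `app.use(cors({ origin: allowedOrigins }))` in your Express server.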
If you encounter connection issues after deploying your React app, check the following:

- The API URL in your React code points to your backend's public domain, not localhost
- The backend's CORS configuration allows your Vercel app's origin
- The backend server is running and its port is open in your firewall
- The backend is served over HTTPS if your frontend is, since browsers block mixed content
Yes, you can run DeepSeek R1 without a GPU; however, performance may be significantly slower compared to running it on a machine with a compatible NVIDIA GPU. For optimal performance, especially with larger models, a GPU is recommended.