Hacker News Semantic Search: Production RAG with CloudQuery and Postgres

Table of Contents

Introduction
Introduction to CloudQuery
Install CloudQuery and Prepare PostgreSQL
Sync Hacker News to PostgreSQL
Add pgvector Embeddings
Semantic Search with LangChain Postgres
Other CloudQuery Features
What You’ve Built

Introduction
Building RAG pipelines typically involves moving data from APIs to vector stores, which can be complicated and diverts AI engineers from their core work of building intelligent applications. Teams spend weeks debugging connection timeouts and API failures instead of improving model performance.
CloudQuery eliminates this complexity so AI teams can spend time building intelligent applications instead of debugging data pipelines. The same RAG pipeline that takes weeks to build and maintain with custom code becomes a 10-line YAML configuration that runs reliably in production.
This guide demonstrates how to replace custom Python scripts with CloudQuery’s declarative YAML configuration to build production-ready RAG pipelines with pgvector integration.

Stay Current with CodeCut
Actionable Python tips, curated for busy data pros. Skim in under 2 minutes, three times a week.


Key Takeaways
Here’s what you’ll learn:

Configure CloudQuery to sync Hacker News stories and generate OpenAI embeddings in a single YAML workflow
Set up pgvector extension in PostgreSQL for storing 1536-dimensional vectors with automatic indexing
Query semantic similarities using LangChain Postgres similarity_search across embedded content chunks
Leverage CloudQuery’s write modes, batching, and time-based incremental syncing for production deployments

Introduction to CloudQuery
CloudQuery operates as a high-performance data movement framework with intelligent batching, state persistence, and automatic error recovery. Its plugin ecosystem includes 100+ source connectors and supports advanced features like pgvector integration for AI applications.
Key features:

10x Faster Development: Configure complex data pipelines in minutes with pre-built connectors for AWS, GCP, Azure, and 100+ SaaS platforms instead of months of custom code
Production Data Management: Sync state persistence with resume capability, incremental processing for delta detection, and automatic schema evolution eliminate manual maintenance
Built-in Data Governance: Native transformers for column obfuscation, PII removal, and data cleaning ensure compliance without custom code
Optimized Resource Usage: Intelligent batching (100MB default), connection pooling, and adaptive scheduling maximize database performance while respecting API limits
Built for Modern AI: Native pgvector integration with automated text splitting, embedding generation, and semantic indexing for production RAG applications

Install CloudQuery and Prepare PostgreSQL
To install CloudQuery, run the following command:
# macOS
brew install cloudquery/tap/cloudquery

# or download a release (Linux example)
curl -L https://github.com/cloudquery/cloudquery/releases/download/cli-v6.29.2/cloudquery_linux_amd64 -o cloudquery
chmod +x cloudquery

For other installation methods (Windows, Docker, or package managers), visit the CloudQuery installation guide.
Create a database that will be used for this tutorial:
createdb cloudquery

Export the credentials that will be used for the later steps:
export POSTGRESQL_CONNECTION_STRING="postgresql://user:pass@localhost:5432/database"
export OPENAI_API_KEY=sk-…

Here's how to get the credentials:

PostgreSQL: Replace user:pass@localhost:5432/database with your actual database connection details
OpenAI API Key: Get your API key from the OpenAI platform

Sync Hacker News to PostgreSQL
To sync Hacker News to PostgreSQL, start by authenticating with CloudQuery Hub:
cloudquery login

Then run the following command to create a YAML file that syncs Hacker News to PostgreSQL:
cloudquery init --source hackernews --destination postgresql

This command will create a YAML file called hackernews_to_postgresql.yaml:
kind: source
spec:
  name: "hackernews"
  path: "cloudquery/hackernews"
  registry: "cloudquery"
  version: "v3.8.2"
  tables: ["*"]
  backend_options:
    table_name: "cq_state_hackernews"
    connection: "@@plugins.postgresql.connection"
  destinations:
    - "postgresql"
  spec:
    item_concurrency: 100
    start_time: 3 hours ago
---
kind: destination
spec:
  name: "postgresql"
  path: "cloudquery/postgresql"
  registry: "cloudquery"
  version: "v8.12.1"
  write_mode: "overwrite-delete-stale"
  spec:
    connection_string: "${POSTGRESQL_CONNECTION_STRING}"

We can now sync the Hacker News data to PostgreSQL by running the following command:
cloudquery sync hackernews_to_postgresql.yaml

Output:
Loading spec(s) from hackernews_to_postgresql.yaml
Starting sync for: hackernews (cloudquery/hackernews@v3.8.2) -> [postgresql (cloudquery/postgresql@v8.12.1)]
Sync completed successfully. Resources: 9168, Errors: 0, Warnings: 0, Time: 26s

Let’s check if the data was ingested successfully by connecting to the CloudQuery database:
psql -U postgres -d cloudquery

Then inspect the available tables:
\dt

Schema | Name | Type | Owner
--------+-----------------------------+-------+------------
public | cq_state_hackernews | table | khuyentran
public | hackernews_items | table | khuyentran

CloudQuery automatically creates two tables:

cq_state_hackernews: tracks sync state for incremental updates
hackernews_items: contains the actual Hacker News data

View the schema of the hackernews_items table:
\d hackernews_items

Output:
Table "public.hackernews_items"
Column | Type | Collation | Nullable | Default
-----------------+-----------------------------+-----------+----------+---------
_cq_sync_time | timestamp without time zone | | |
_cq_source_name | text | | |
_cq_id | uuid | | not null |
_cq_parent_id | uuid | | |
id | bigint | | not null |
deleted | boolean | | |
type | text | | |
by | text | | |
time | timestamp without time zone | | |
text | text | | |
dead | boolean | | |
parent | bigint | | |
kids | bigint[] | | |
url | text | | |
score | bigint | | |
title | text | | |
parts | bigint[] | | |
descendants | bigint | | |

Check the first 5 rows with type story:
SELECT id, type, score, title FROM hackernews_items WHERE type='story' LIMIT 5;

id | type | score | title
----------+-------+-------+-----------------------------------------------------------------------
45316982 | story | 3 | Ask HN: Why don't Americans hire human assistants for everyday tasks?
45317015 | story | 2 | Streaming Live: San Francisco Low Riders Festival
45316989 | story | 1 | The Collapse of Coliving Operators, and Why the Solution Is Upstream
45317092 | story | 0 |
45317108 | story | 1 | Patrick McGovern was the maven of ancient tipples

Add pgvector Embeddings
pgvector is a PostgreSQL extension that adds vector similarity search capabilities, perfect for RAG applications. For a complete guide on implementing RAG with pgvector, see our semantic search tutorial.
CloudQuery provides built-in pgvector support, automatically generating embeddings alongside your data sync. First, enable the pgvector extension in PostgreSQL:
psql -d cloudquery -c "CREATE EXTENSION IF NOT EXISTS vector;"

Now add the pgvector configuration to the hackernews_to_postgresql.yaml:
kind: source
spec:
  name: "hackernews"
  path: "cloudquery/hackernews"
  registry: "cloudquery"
  version: "v3.8.2"
  tables: ["*"]
  backend_options:
    table_name: "cq_state_hackernews"
    connection: "@@plugins.postgresql.connection"
  destinations:
    - "postgresql"
  spec:
    item_concurrency: 100
    start_time: 3 hours ago
---
kind: destination
spec:
  name: "postgresql"
  path: "cloudquery/postgresql"
  registry: "cloudquery"
  version: "v8.12.1"
  write_mode: "overwrite-delete-stale"
  spec:
    connection_string: "${POSTGRESQL_CONNECTION_STRING}"
    pgvector_config:
      tables:
        - source_table_name: "hackernews_items"
          target_table_name: "hackernews_items_embeddings"
          embed_columns: ["title"]
          metadata_columns: ["id", "type", "url", "by"]
          filter_condition: "type = 'story' AND title IS NOT NULL AND title != ''"
          text_splitter:
            recursive_text:
              chunk_size: 200
              chunk_overlap: 0
      openai_embedding:
        api_key: "${OPENAI_API_KEY}"
        model_name: "text-embedding-3-small"
        dimensions: 1536

Explanation of pgvector configuration:

source_table_name: The original CloudQuery table to read data from
target_table_name: New table where embeddings and metadata will be stored
embed_columns: Which columns to convert into vector embeddings (only non-empty text is processed)
metadata_columns: Additional columns to preserve alongside embeddings for filtering and context
filter_condition: SQL WHERE clause to only embed specific rows (stories with non-empty titles)
chunk_size: Maximum characters per text chunk (short titles become single chunks)
chunk_overlap: Overlapping characters between chunks to preserve context across boundaries
model_name: OpenAI embedding model (text-embedding-3-small offers 5x cost savings vs ada-002)
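To build intuition for how chunk_size and chunk_overlap interact, here is a minimal fixed-window splitter. This is an illustration only — CloudQuery's recursive_text splitter is smarter about breaking on semantic boundaries — but the size/overlap arithmetic is the same:

```python
def chunk_text(text: str, chunk_size: int = 200, chunk_overlap: int = 0) -> list[str]:
    """Split text into fixed-size character windows with optional overlap."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap  # each window starts `step` chars after the last
    return [text[i : i + chunk_size] for i in range(0, len(text), step)]

title = "Ask HN: Why don't Americans hire human assistants for everyday tasks?"
chunks = chunk_text(title, chunk_size=200, chunk_overlap=0)
print(len(chunks))  # a short HN title fits in a single chunk
```

This is why the 200-character chunk_size is comfortable here: almost every Hacker News title becomes exactly one chunk, so overlap can stay at 0.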

Since we’ve already synced the data, let’s drop the sync-state table so the next run re-processes the same time window:
psql -U postgres -d cloudquery -c 'DROP TABLE IF EXISTS cq_state_hackernews'

Run the enhanced sync:
cloudquery sync hackernews_to_postgresql.yaml

CloudQuery now produces an embeddings table alongside the source data:
psql -U postgres -d cloudquery

List the available tables:
\dt

Output:
Schema | Name | Type | Owner
--------+-----------------------------+-------+------------
public | cq_state_hackernews | table | khuyentran
public | hackernews_items | table | khuyentran
public | hackernews_items_embeddings | table | khuyentran

Inspect the embeddings table structure:
-- Check table structure and vector dimensions
\d hackernews_items_embeddings;

Output:
cloudquery=# \d hackernews_items_embeddings;
Table "public.hackernews_items_embeddings"
Column | Type | Collation | Nullable | Default
---------------+-----------------------------+-----------+----------+---------
_cq_id | uuid | | |
id | bigint | | not null |
type | text | | |
url | text | | |
_cq_sync_time | timestamp without time zone | | |
chunk | text | | |
embedding | vector(1536) | | |

Sample a few records to see the data:
-- Sample a few records to see the data
SELECT id, type, url, "by", chunk  -- "by" is a reserved word, so it must be quoted
FROM hackernews_items_embeddings
LIMIT 5;

Output:
type | by | chunk
-------+-----------------+--------------------------------------------------------
story | Kaibeezy | Autonomous Airport Ground Support Equipment
story | drankl | Dining across the divide: 'We disagreed on…
story | wjSgoWPm5bWAhXB | World's First AI-designed viruses a step towards…
story | Brajeshwar | Banned in the U.S. and Europe, Huawei aims for…
story | danielfalbo | Zig Z-ant: run ML models on microcontrollers

Check for NULL embeddings:
SELECT COUNT(*) as rows_without_embeddings
FROM hackernews_items_embeddings
WHERE embedding IS NULL;

Output:
rows_without_embeddings
-------------------------
0

Great! There are no NULL embeddings.
Check the chunk sizes and content distribution:
-- Check chunk sizes and content distribution
SELECT
type,
COUNT(*) as count,
AVG(LENGTH(chunk)) as avg_chunk_length,
MIN(LENGTH(chunk)) as min_chunk_length,
MAX(LENGTH(chunk)) as max_chunk_length
FROM hackernews_items_embeddings
GROUP BY type;

Output:
type | count | avg_chunk_length | min_chunk_length | max_chunk_length
-------+-------+---------------------+------------------+------------------
story | 695 | 52.1366906474820144 | 4 | 86

The 52-character average chunk length confirms that most Hacker News titles fit comfortably within the configured 200-character chunk_size limit, validating the text splitter settings.
Semantic Search with LangChain Postgres
Now that we have embeddings stored in PostgreSQL, let’s use LangChain Postgres to handle vector operations.
Start with installing the necessary packages:
pip install langchain-postgres langchain-openai psycopg[binary] greenlet

Next, set up the embedding service that will convert text queries into vector representations for similarity matching. In this example, we’ll use the OpenAI embedding model.
import os
from langchain_postgres import PGEngine, PGVectorStore
from langchain_openai import OpenAIEmbeddings

# Initialize embeddings (requires OPENAI_API_KEY environment variable)
embeddings = OpenAIEmbeddings(
    model="text-embedding-3-small",
    openai_api_key=os.getenv("OPENAI_API_KEY"),
)

Connect to the hackernews_items_embeddings table generated by CloudQuery containing the pre-computed embeddings.
# Initialize connection engine
CONNECTION_STRING = "postgresql+asyncpg://khuyentran@localhost:5432/cloudquery"
engine = PGEngine.from_connection_string(url=CONNECTION_STRING)

# Connect to existing CloudQuery vector store table
vectorstore = PGVectorStore.create_sync(
    engine=engine,
    table_name="hackernews_items_embeddings",
    embedding_service=embeddings,
    content_column="chunk",
    embedding_column="embedding",
    id_column="id",  # Map to CloudQuery's id column
    metadata_columns=["type", "url", "by"],  # Include story metadata
)

Finally, we can query the vector store to find semantically similar content.
# Semantic search for Apple-related stories
docs = vectorstore.similarity_search("Apple technology news", k=4)
for doc in docs:
    print(f"Title: {doc.page_content}")

Output:
Title: Apple takes control of all core chips in iPhoneAir with new arch prioritizing AI
Title: Apple used AI to uncover new blood pressure notification
Title: Apple Losing Talent to OpenAI
Title: Standard iPhone 17 Outperforms Expectations as Apple Ramps Up Manufacturing

The vector search successfully identifies relevant Apple content beyond exact keyword matches, capturing stories about AI development, talent acquisition, and product development that share semantic meaning with the query.
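Under the hood, similarity_search ranks stored vectors by their distance to the query embedding. A toy cosine-similarity calculation — using made-up 3-dimensional vectors instead of the real 1536-dimensional embeddings — shows why a semantically close story scores higher than an unrelated one:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical tiny "embeddings" -- real models output 1536 dimensions
query = [0.9, 0.1, 0.0]        # "Apple technology news"
apple_story = [0.8, 0.2, 0.1]  # semantically close story
sports_story = [0.0, 0.1, 0.9] # unrelated topic

print(cosine_similarity(query, apple_story) > cosine_similarity(query, sports_story))  # True
```

pgvector performs this comparison (with an index) directly inside PostgreSQL, so the ranking never leaves the database.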
Other CloudQuery Features
Let’s explore some of the other features of CloudQuery to enhance your pipeline.
Multi-Destination Support
CloudQuery can route the same source data to multiple destinations in a single sync operation:
destinations: ["postgresql", "bigquery", "s3"]

This capability means you can simultaneously:

Store operational data in PostgreSQL for real-time queries
Load analytical data into BigQuery for data warehousing
Archive raw data to S3 for compliance and backup

Write Modes
CloudQuery provides different write modes to control how data updates are handled:
Smart Incremental Updates:
This is the default write mode and is recommended for most use cases. It updates existing records and removes any data that’s no longer present in the source, which is ideal for maintaining accurate, up-to-date datasets where items can be deleted or modified.
write_mode: "overwrite-delete-stale"

Append-Only Mode:
Append-only mode only adds new data without modifying existing records. Ideal for time-series data, logs, or when you want to preserve historical versions of records.
write_mode: "append"

Selective Overwrite:
Selective overwrite mode replaces existing records with matching primary keys but doesn’t remove stale data. Useful when you know the source data is complete but want to keep orphaned records.
write_mode: "overwrite"

Performance Batching
You can optimize memory usage and database performance by configuring the batch size and timeout:
spec:
  batch_size: 1000          # Records per batch
  batch_size_bytes: 4194304 # 4MB memory limit
  batch_timeout: "20s"      # Max wait between writes

Details of the parameters:

batch_size: Number of records grouped together before writing.
batch_size_bytes: Maximum memory size per batch in bytes.
batch_timeout: Time limit before writing partial batches.
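Conceptually, the batcher accumulates records until either the count or the byte limit is hit, then flushes. Here is a simplified sketch of that flush logic (illustrative only — not CloudQuery's implementation, and it omits the time-based batch_timeout flush):

```python
import json

def batch_records(records, batch_size=1000, batch_size_bytes=4_194_304):
    """Yield batches, flushing when either the record count or byte limit is reached."""
    batch, batch_bytes = [], 0
    for record in records:
        record_bytes = len(json.dumps(record).encode())
        # Flush first if appending this record would exceed the byte budget
        if batch and batch_bytes + record_bytes > batch_size_bytes:
            yield batch
            batch, batch_bytes = [], 0
        batch.append(record)
        batch_bytes += record_bytes
        if len(batch) >= batch_size:  # count limit reached
            yield batch
            batch, batch_bytes = [], 0
    if batch:  # final partial batch
        yield batch

rows = [{"id": i, "title": f"story {i}"} for i in range(2500)]
batches = list(batch_records(rows, batch_size=1000))
print([len(b) for b in batches])  # [1000, 1000, 500]
```

Raising batch_size trades memory for fewer, larger database writes; the byte cap keeps a batch of unusually large rows from blowing the memory budget.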

Retry Handling
The max_retries parameter ensures reliable data delivery by automatically retrying failed write operations a specified number of times before marking them as permanently failed.
spec:
  max_retries: 5 # Number of retry attempts
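In effect, each write is wrapped in retry logic along these lines (a simplified sketch, not CloudQuery's actual code; real implementations also back off between attempts):

```python
import time

def write_with_retries(write_fn, max_retries=5, backoff_seconds=0.0):
    """Call write_fn, retrying on failure up to max_retries additional attempts."""
    last_error = None
    for attempt in range(max_retries + 1):
        try:
            return write_fn()
        except Exception as exc:
            last_error = exc
            if backoff_seconds:
                time.sleep(backoff_seconds * (2 ** attempt))  # exponential backoff
    raise RuntimeError(f"write failed after {max_retries} retries") from last_error

# Simulated flaky write that succeeds on the third attempt
attempts = {"count": 0}

def flaky_write():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("transient network error")
    return "ok"

print(write_with_retries(flaky_write))  # ok
```

Transient failures (network blips, connection resets) are absorbed silently; only errors that persist across every attempt surface as a sync failure.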

Time-Based Incremental Syncing
CloudQuery’s start_time configuration prevents unnecessary data reprocessing by only syncing records created after a specified timestamp, dramatically reducing sync time and resource usage:
spec:
  start_time: "7 days ago" # Only sync recent data
  # Alternative formats:
  # start_time: "2024-01-15T10:00:00Z" # Specific timestamp
  # start_time: "1 hour ago"           # Relative time

See the source plugin configuration and destination plugin configuration documentation for all available options.
What You’ve Built
In this tutorial, you’ve created a production-ready RAG pipeline that:

Syncs live data from Hacker News with automatic retry and state persistence
Generates vector embeddings using OpenAI’s latest models
Enables semantic search across thousands of posts and comments
Scales to handle any CloudQuery-supported data source (100+ connectors)

All with zero custom ETL code and enterprise-grade reliability.
Next steps:

Explore the Hacker News source documentation for advanced filtering options (top, best, ask HN, etc.)
Add additional text sources (GitHub issues, Typeform surveys, Airtable notes) to the same pipeline
Schedule CloudQuery with cron, Airflow, or Kubernetes for continuous refresh
Integrate with LangChain PGVector or LlamaIndex PGVector in production RAG systems

Related Tutorials

Foundational Concepts: Implement Semantic Search in Postgres Using pgvector and Ollama for comprehensive pgvector fundamentals
Production Quality: Build Production-Ready RAG Systems with MLflow Quality Metrics for RAG evaluation and monitoring
Scaling Beyond PostgreSQL: Natural-Language Queries for Spark: Using LangChain to Run SQL on DataFrames for distributed processing with LangChain

📚 Want to go deeper? Learning new techniques is the easy part. Knowing how to structure, test, and deploy them is what separates side projects from real work. My book shows you how to build data science projects that actually make it to production. Get the book →



Build a Complete RAG System with 5 Open-Source Tools

Table of Contents

Introduction to RAG Systems
Document Ingestion with MarkItDown
Intelligent Chunking with LangChain
Creating Searchable Embeddings with SentenceTransformers
Building Your Knowledge Database with ChromaDB
Enhanced Answer Generation with Open-Source LLMs
Building a Simple Application with Gradio
Conclusion

Introduction
Have you ever spent 30 minutes searching through Slack threads, email attachments, and shared drives just to find that one technical specification your colleague mentioned last week?
It is a common scenario that repeats daily across organizations worldwide. Knowledge workers spend valuable time searching for information that should be instantly accessible, leading to decreased productivity.
Retrieval-Augmented Generation (RAG) systems solve this problem by transforming your documents into an intelligent, queryable knowledge base. Ask questions in natural language and receive instant answers with source citations, eliminating time-consuming manual searches.
In this article, we’ll build a complete RAG pipeline that turns document collections into an AI-powered question-answering system.


Key Takeaways
Here’s what you’ll learn:

Convert documents with MarkItDown in 3 lines
Chunk text intelligently using LangChain RecursiveCharacterTextSplitter
Generate embeddings locally with SentenceTransformers model
Store vectors in ChromaDB persistent database
Generate answers using Ollama local LLMs
Deploy web interface with Gradio streaming

💻 Get the Code: The complete source code and Jupyter notebook for this tutorial are available on GitHub. Clone it to follow along!

Introduction to RAG Systems
RAG (Retrieval-Augmented Generation) combines document retrieval with language generation to create intelligent Q&A systems. Instead of relying solely on training data, RAG systems search through your documents to find relevant information, then use that context to generate accurate, source-backed responses.
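The retrieve-then-generate loop can be sketched end to end with a toy keyword retriever standing in for vector search (the helper names here are hypothetical; the pipeline built in this article uses real embeddings and a local LLM instead):

```python
import re

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (a stand-in for vector search)."""
    query_words = set(re.findall(r"\w+", query.lower()))
    scored = sorted(
        docs,
        key=lambda d: len(query_words & set(re.findall(r"\w+", d.lower()))),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to the LLM."""
    joined = "\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "Python functions are defined with the def keyword.",
    "ChromaDB stores vector embeddings on disk.",
]
context = retrieve("How is a Python function defined?", docs)
prompt = build_prompt("How is a Python function defined?", context)
print(context[0])
```

Every component in this tutorial maps onto one of these two steps: MarkItDown, the chunker, SentenceTransformers, and ChromaDB implement retrieval; Ollama implements generation.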
Environment Setup
Install the required libraries for building your RAG pipeline:
pip install markitdown[pdf] sentence-transformers langchain-text-splitters chromadb gradio langchain-ollama ollama

These libraries provide:

markitdown: Microsoft’s document conversion tool that transforms PDFs, Word docs, and other formats into clean markdown
sentence-transformers: Local embedding generation for converting text into searchable vectors
langchain-text-splitters: Intelligent text chunking that preserves semantic meaning
chromadb: Self-hosted vector database for storing and querying document embeddings
gradio: Web interface builder for creating user-friendly Q&A applications
langchain-ollama: LangChain integration for local LLM inference

Install Ollama and download a model:
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2

Next, create a project directory structure to organize your files:
mkdir processed_docs documents

These directories organize your project:

processed_docs: Stores converted markdown files
documents: Contains original source files (PDFs, Word docs, etc.)

Create these directories in your current working path with appropriate read/write permissions.
Dataset Setup: Python Technical Documentation
To demonstrate the RAG pipeline, we’ll use “Think Python” by Allen Downey, a comprehensive programming guide freely available under Creative Commons.
We’ll download the Python guide and save it in the documents directory.
import requests
from pathlib import Path

# Get the file path
output_folder = "documents"
filename = "think_python_guide.pdf"
url = "https://greenteapress.com/thinkpython/thinkpython.pdf"
file_path = Path(output_folder) / filename

def download_file(url: str, file_path: Path):
    response = requests.get(url, stream=True, timeout=30)
    response.raise_for_status()
    file_path.write_bytes(response.content)

# Download the file if it doesn't exist
if not file_path.exists():
    download_file(
        url=url,
        file_path=file_path,
    )

Next, let’s convert this PDF into a format that our RAG system can process and search through.
Document Ingestion with MarkItDown
RAG systems need documents in a structured format that AI models can understand and process effectively.
MarkItDown solves this challenge by converting any document format into clean markdown while preserving the original structure and meaning.
Converting Your Python Guide
Start by converting the Python guide to understand how MarkItDown works:
from markitdown import MarkItDown

# Initialize the converter
md = MarkItDown()

# Convert the Python guide to markdown
result = md.convert(file_path)
python_guide_content = result.text_content

# Display the conversion results
print("First 300 characters:")
print(python_guide_content[:300] + "…")

In this code:

MarkItDown() creates a document converter that handles multiple file formats automatically
convert() processes the PDF and returns a result object containing the extracted text
text_content provides the clean markdown text ready for processing

Output:
First 300 characters:
Think Python

How to Think Like a Computer Scientist

Version 2.0.17

Think Python

How to Think Like a Computer Scientist

Version 2.0.17

Allen Downey

Green Tea Press

Needham, Massachusetts

Copyright © 2012 Allen Downey.

Green Tea Press
9 Washburn Ave
Needham MA 02492

Permission is granted…

MarkItDown automatically detects the PDF format and extracts clean text while preserving the book’s structure, including chapters, sections, and code examples.
Preparing Document for Processing
Now that you understand the basic conversion, let’s prepare the document content for processing. We’ll store the guide’s content with source information for later use in chunking and retrieval:
# Organize the converted document
processed_document = {
    'source': file_path,
    'content': python_guide_content
}

# Create a list containing our single document for consistency with downstream processing
documents = [processed_document]

# Document is now ready for chunking and embedding
print(f"Document ready: {len(processed_document['content']):,} characters")

Output:
Document ready: 460,251 characters

With our document successfully converted to markdown, the next step is breaking it into smaller, searchable pieces.
Intelligent Chunking with LangChain
AI models can’t process entire documents due to limited context windows. Chunking breaks documents into smaller, searchable pieces while preserving semantic meaning.
Understanding Text Chunking with a Simple Example
Let’s see how text chunking works with a simple document:
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Create a simple example that will be split
sample_text = """
Machine learning transforms data processing. It enables pattern recognition without explicit programming.

Deep learning uses neural networks with multiple layers. These networks discover complex patterns automatically.

Natural language processing combines ML with linguistics. It helps computers understand human language effectively.
"""

# Apply chunking with smaller size to demonstrate splitting
demo_splitter = RecursiveCharacterTextSplitter(
    chunk_size=150,  # Small size to force splitting
    chunk_overlap=30,
    separators=["\n\n", "\n", ". ", " ", ""],  # Split hierarchy
)

sample_chunks = demo_splitter.split_text(sample_text.strip())

print(f"Original: {len(sample_text.strip())} chars → {len(sample_chunks)} chunks")

# Show chunks
for i, chunk in enumerate(sample_chunks):
    print(f"Chunk {i+1}: {chunk}")

Output:
Original: 336 chars → 3 chunks
Chunk 1: Machine learning transforms data processing. It enables pattern recognition without explicit programming.
Chunk 2: Deep learning uses neural networks with multiple layers. These networks discover complex patterns automatically.
Chunk 3: Natural language processing combines ML with linguistics. It helps computers understand human language effectively.

Notice how the text splitter:

Split the 336-character text into 3 chunks, each under the 150-character limit
Prioritized semantic boundaries via the separator hierarchy: paragraphs ("\n\n") → lines ("\n") → sentences (". ") → words (" ") → characters ("")
Configured a 30-character overlap, which is applied only when a chunk must be cut mid-paragraph (here each paragraph fit in one chunk, so no overlap appears)

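The overlap mechanic itself is easy to see with a stdlib-only sketch. This is a simplified fixed-window chunker for illustration, not LangChain's splitter (which prefers semantic boundaries):

```python
def chunk_with_overlap(text, chunk_size=100, overlap=20):
    """Fixed-size chunking: step forward by (chunk_size - overlap)
    so consecutive chunks share `overlap` characters."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# 300 distinct-ish characters so the overlap is visible
text = "".join(chr(65 + i % 26) for i in range(300))
chunks = chunk_with_overlap(text)

print(len(chunks))                        # 4
print(chunks[0][-20:] == chunks[1][:20])  # True: 20-char overlap
```

Each chunk starts 80 characters after the previous one, so its first 20 characters repeat the previous chunk's last 20, preserving context across the boundary.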
Processing Multiple Documents at Scale
Now let’s create a text splitter with larger chunks and apply it to all our converted documents:
# Configure the text splitter with Q&A-optimized settings
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=600,  # Optimal chunk size for Q&A scenarios
    chunk_overlap=120,  # 20% overlap to preserve context
    separators=["\n\n", "\n", ". ", " ", ""]  # Split hierarchy
)

Next, use the text splitter to process all our documents:
def process_document(doc, text_splitter):
    """Process a single document into chunks."""
    doc_chunks = text_splitter.split_text(doc["content"])
    return [{"content": chunk, "source": doc["source"]} for chunk in doc_chunks]

# Process all documents and create chunks
all_chunks = []
for doc in documents:
    doc_chunks = process_document(doc, text_splitter)
    all_chunks.extend(doc_chunks)

Examine how the chunking process distributed content across our documents:
from collections import Counter

source_counts = Counter(chunk["source"] for chunk in all_chunks)
chunk_lengths = [len(chunk["content"]) for chunk in all_chunks]

print(f"Total chunks created: {len(all_chunks)}")
print(f"Chunk length: {min(chunk_lengths)}-{max(chunk_lengths)} characters")
print(f"Source document: {Path(documents[0]['source']).name}")

Output:
Total chunks created: 1007
Chunk length: 68-598 characters
Source document: think_python_guide.pdf

Our text chunks are ready. Next, we’ll transform them into a format that enables intelligent similarity search.
Creating Searchable Embeddings with SentenceTransformers
RAG systems need to understand text meaning, not just match keywords. SentenceTransformers converts your text into numerical vectors that capture semantic relationships, allowing the system to find truly relevant information even when exact words don’t match.
Generate Embeddings
Let’s generate embeddings for our text chunks:
from sentence_transformers import SentenceTransformer

# Load Q&A-optimized embedding model (downloads automatically on first use)
model = SentenceTransformer('multi-qa-mpnet-base-dot-v1')

# Extract documents and create embeddings
documents = [chunk["content"] for chunk in all_chunks]
embeddings = model.encode(documents)

print("Embedding generation results:")
print(f" - Embeddings shape: {embeddings.shape}")
print(f" - Vector dimensions: {embeddings.shape[1]}")

In this code:

SentenceTransformer() loads the Q&A-optimized model that converts text to 768-dimensional vectors
multi-qa-mpnet-base-dot-v1 is specifically trained on 215M question-answer pairs for superior Q&A performance
model.encode() transforms all text chunks into numerical embeddings in a single batch operation

The output shows 1007 chunks converted to 768-dimensional vectors:
Embedding generation results:
 - Embeddings shape: (1007, 768)
 - Vector dimensions: 768

Test Semantic Similarity
Let’s test semantic similarity by querying for Python programming concepts:
# Test how one query finds relevant Python programming content
from sentence_transformers import util

query = "How do you define functions in Python?"
document_chunks = [
    "Variables store data values that can be used later in your program.",
    "A function is a block of code that performs a specific task when called.",
    "Loops allow you to repeat code multiple times efficiently.",
    "Functions can accept parameters and return values to the calling code."
]

# Encode query and documents
query_embedding = model.encode(query)
doc_embeddings = model.encode(document_chunks)

Now we’ll calculate similarity scores and rank the results. The util.cos_sim() function computes cosine similarity between vectors; mathematically it ranges from -1 to 1, and for these embeddings scores typically fall between 0 (unrelated) and 1 (identical meaning):
# Calculate similarities using SentenceTransformers util
similarities = util.cos_sim(query_embedding, doc_embeddings)[0]

# Create ranked results
ranked_results = sorted(
    zip(document_chunks, similarities),
    key=lambda x: x[1],
    reverse=True
)

print(f"Query: '{query}'")
print("Document chunks ranked by relevance:")
for i, (chunk, score) in enumerate(ranked_results, 1):
    print(f"{i}. ({score:.3f}): '{chunk}'")

Output:
Query: 'How do you define functions in Python?'
Document chunks ranked by relevance:
1. (0.674): 'A function is a block of code that performs a specific task when called.'
2. (0.607): 'Functions can accept parameters and return values to the calling code.'
3. (0.461): 'Loops allow you to repeat code multiple times efficiently.'
4. (0.448): 'Variables store data values that can be used later in your program.'

The similarity scores demonstrate semantic understanding: the two function-related chunks rank highest (0.674 and 0.607), while the chunks about loops and variables score noticeably lower (0.461 and 0.448), even though none of them share the word “define” with the query.
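Under the hood, cosine similarity is just the dot product of the two vectors divided by the product of their lengths; mathematically it spans -1 to 1, though embedding similarities in practice usually land between 0 and 1. A stdlib-only sketch:

```python
import math

def cosine_sim(a, b):
    # cos(theta) = (a · b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_sim([1.0, 0.0], [1.0, 0.0]))   # 1.0  (same direction)
print(cosine_sim([1.0, 0.0], [0.0, 1.0]))   # 0.0  (orthogonal)
print(cosine_sim([1.0, 0.0], [-1.0, 0.0]))  # -1.0 (opposite direction)
```

util.cos_sim() does the same computation, vectorized over batches of embeddings.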
Building Your Knowledge Database with ChromaDB
These embeddings demonstrate semantic search capability, but in-memory storage has scalability limitations: large vector collections quickly exhaust system resources.
Vector databases provide essential production capabilities:

Persistent storage: Data survives system restarts and crashes
Optimized indexing: Fast similarity search using HNSW algorithms
Memory efficiency: Handles millions of vectors without RAM exhaustion
Concurrent access: Multiple users query simultaneously
Metadata filtering: Search by document properties and attributes

ChromaDB delivers these features with a Python-native API that integrates seamlessly into your existing data pipeline.
Initialize Vector Database
First, we’ll set up the ChromaDB client and create a collection to store our document vectors.
import chromadb

# Create persistent client for data storage
client = chromadb.PersistentClient(path="./chroma_db")

# Create collection for business documents (or get existing)
collection = client.get_or_create_collection(
    name="python_guide",
    metadata={"description": "Python programming guide"}
)

print(f"Created collection: {collection.name}")
print(f"Collection ID: {collection.id}")

Output:
Created collection: python_guide
Collection ID: 42d23900-6c2a-47b0-8253-0a9b6dad4f41

In this code:

PersistentClient(path="./chroma_db") creates a local vector database that persists data to disk
get_or_create_collection() creates a new collection or returns an existing one with the same name

Store Documents with Metadata
Now we’ll store our document chunks with basic metadata in ChromaDB with the add() method.
# Prepare metadata and add documents to collection
metadatas = [{"document": Path(chunk["source"]).name} for chunk in all_chunks]

collection.add(
    documents=documents,
    embeddings=embeddings.tolist(),  # Convert numpy array to list
    metadatas=metadatas,  # Metadata for each document
    ids=[f"doc_{i}" for i in range(len(documents))],  # Unique identifiers for each document
)

print(f"Collection count: {collection.count()}")

Output:
Collection count: 1007

The database now contains 1007 searchable document chunks with their vector embeddings. ChromaDB persists this data to disk, enabling instant queries without reprocessing documents on restart.
Query the Knowledge Base
Let’s search the vector database using natural language questions and retrieve relevant document chunks.
def format_query_results(question, query_embedding, documents, metadatas):
    """Format and print the search results with similarity scores"""
    from sentence_transformers import util

    print(f"Question: {question}\n")

    for i, doc in enumerate(documents):
        # Calculate accurate similarity using sentence-transformers util
        doc_embedding = model.encode([doc])
        similarity = util.cos_sim(query_embedding, doc_embedding)[0][0].item()
        source = metadatas[i].get("document", "Unknown")

        print(f"Result {i+1} (similarity: {similarity:.3f}):")
        print(f"Document: {source}")
        print(f"Content: {doc[:300]}…")
        print()

def query_knowledge_base(question, n_results=2):
    """Query the knowledge base with natural language"""
    # Encode the query using our SentenceTransformer model
    query_embedding = model.encode([question])

    results = collection.query(
        query_embeddings=query_embedding.tolist(),
        n_results=n_results,
        include=["documents", "metadatas", "distances"],
    )

    # Extract results and format them
    documents = results["documents"][0]
    metadatas = results["metadatas"][0]

    format_query_results(question, query_embedding, documents, metadatas)

In this code:

collection.query() performs vector similarity search using the question text as input
query_embeddings accepts a list of encoded query vectors, enabling batch processing of multiple questions
n_results limits the number of most similar documents returned
include specifies which data to return: document text, metadata, and similarity distances

Let’s test the query function with a question:
query_knowledge_base("How do if-else statements work in Python?")

Output:
Question: How do if-else statements work in Python?

Result 1 (similarity: 0.636):
Document: think_python_guide.pdf
Content: 5.6 Chained conditionals

Sometimes there are more than two possibilities and we need more than two branches.
One way to express a computation like that is a chained conditional:

if x < y:
print

elif x > y:

print

x is less than y

x is greater than y

else:

print

x and y are equa…

Result 2 (similarity: 0.605):
Document: think_python_guide.pdf
Content: 5. An unclosed opening operator ((, {, or [) makes Python continue with the next line
as part of the current statement. Generally, an error occurs almost immediately in the
next line.

6. Check for the classic = instead of == inside a conditional.

7. Check the indentation to make sure it lines up the…

The search finds relevant content with strong similarity scores (0.636 and 0.605).
Enhanced Answer Generation with Open-Source LLMs
Vector similarity search retrieves related content, but the results may be scattered across multiple chunks without forming a complete answer.
LLMs solve this by weaving retrieved context into unified responses that directly address user questions.
In this section, we’ll integrate Ollama‘s local LLMs with our vector search to generate coherent answers from retrieved chunks.
Answer Generation Implementation
First, set up the components for LLM-powered answer generation:
from langchain_ollama import OllamaLLM
from langchain.prompts import PromptTemplate

# Initialize the local LLM
llm = OllamaLLM(model="llama3.2:latest", temperature=0.1)

Next, create a focused prompt template for technical documentation queries:
prompt_template = PromptTemplate(
    input_variables=["context", "question"],
    template="""You are a Python programming expert. Based on the provided documentation, answer the question clearly and accurately.

Documentation:
{context}

Question: {question}

Answer (be specific about syntax, keywords, and provide examples when helpful):"""
)

# Create the processing chain
chain = prompt_template | llm

Create a function to retrieve relevant context given a question:
def retrieve_context(question, n_results=5):
    """Retrieve relevant context using embeddings"""
    query_embedding = model.encode([question])
    results = collection.query(
        query_embeddings=query_embedding.tolist(),
        n_results=n_results,
        include=["documents", "metadatas", "distances"],
    )

    documents = results["documents"][0]
    context = "\n\n—SECTION—\n\n".join(documents)
    return context, documents

def get_llm_answer(question, context):
    """Generate answer using retrieved context"""
    answer = chain.invoke(
        {
            "context": context[:2000],
            "question": question,
        }
    )
    return answer

def format_response(question, answer, source_chunks):
    """Format the final response with sources"""
    response = f"**Question:** {question}\n\n"
    response += f"**Answer:** {answer}\n\n"
    response += "**Sources:**\n"

    for i, chunk in enumerate(source_chunks[:3], 1):
        preview = chunk[:100].replace("\n", " ") + "…"
        response += f"{i}. {preview}\n"

    return response

def enhanced_query_with_llm(question, n_results=5):
    """Query function combining retrieval with LLM generation"""
    context, documents = retrieve_context(question, n_results)
    answer = get_llm_answer(question, context)
    return format_response(question, answer, documents)

Testing Enhanced Answer Generation
Let’s test the enhanced system with our challenging question:
# Test the enhanced query system
enhanced_response = enhanced_query_with_llm("How do if-else statements work in Python?")
print(enhanced_response)

Output:
**Question:** How do if-else statements work in Python?

**Answer:** If-else statements in Python are used for conditional execution of code. Here's a breakdown of how they work:

**Syntax**

The basic syntax of an if-else statement is as follows:
```
if condition:
    # code to execute if condition is true
elif condition2:
    # code to execute if condition1 is false and condition2 is true
else:
    # code to execute if both conditions are false
```
**Keywords**

The keywords used in an if-else statement are:

* `if`: used to check a condition
* `elif` (short for "else if"): used to check another condition if the first one is false
* `else`: used to specify code to execute if all conditions are false

**How it works**

Here's how an if-else statement works:

1. The interpreter evaluates the condition inside the `if` block.
2. If the condition is true, the code inside the `if` block is executed.
3. If the condition is false, the interpreter moves on to the next line and checks the condition in the `elif` block.
4. If the condition in the `elif` block is true, the code inside that block is executed.
5. If both conditions are false, the interpreter executes the code inside the `else` block.

**Sources:**
1. 5.6 Chained conditionals Sometimes there are more than two possibilities and we need more than two …
2. 5. An unclosed opening operator ((, {, or [) makes Python continue with the next line as part of the c…
3. if x == y: print else: ’ x and y are equal ’ if x < y: 44 Chapter 5. Conditionals and recur…

Notice how the LLM organizes multiple chunks into logical sections with syntax examples and step-by-step explanations. This transformation turns raw retrieval into actionable programming guidance.
Streaming Interface Implementation
Users now expect the real-time streaming experience popularized by ChatGPT and Claude. Static responses that appear all at once feel outdated and create an impression of sluggishness.
Token-by-token streaming bridges this gap by creating the familiar typing effect that signals active processing.
To implement a streaming interface, we’ll use the chain.stream() method to generate tokens one at a time.
def stream_llm_answer(question, context):
    """Stream LLM answer generation token by token"""
    for chunk in chain.stream({
        "context": context[:2000],
        "question": question,
    }):
        yield getattr(chunk, "content", str(chunk))

Let’s see how streaming works by combining our modular functions:
import time

# Test the streaming functionality
question = "What are Python loops?"
context, documents = retrieve_context(question, n_results=3)

print("Question:", question)
print("Answer: ", end="", flush=True)

# Stream the answer token by token
for token in stream_llm_answer(question, context):
    print(token, end="", flush=True)
    time.sleep(0.05)  # Simulate real-time typing effect

Output:
Question: What are Python loops?
Answer: Python → loops → are → structures → that → repeat → code…

[Each token appears with typing animation]
Final: "Python loops are structures that repeat code blocks."

This creates the familiar ChatGPT-style typing animation where tokens appear progressively.
Building a Simple Application with Gradio
Now that we have a complete RAG system with enhanced answer generation, let’s make it accessible through a web interface.
Your RAG system needs an intuitive interface that non-technical users can access easily. Gradio provides this solution with:

Zero web development required: Create interfaces directly from Python functions
Automatic UI generation: Input fields and buttons generated automatically
Instant deployment: Launch web apps with a single line of code

Interface Function
Let’s create the complete Gradio interface that combines the functions we’ve built into a streaming RAG system:
import gradio as gr

def rag_interface(question):
    """Gradio interface reusing existing format_response function"""
    if not question.strip():
        yield "Please enter a question."
        return

    # Use modular retrieval and streaming
    context, documents = retrieve_context(question, n_results=5)

    response_start = f"**Question:** {question}\n\n**Answer:** "
    answer = ""

    # Stream the answer progressively
    for token in stream_llm_answer(question, context):
        answer += token
        yield response_start + answer

    # Use existing formatting function for final response
    yield format_response(question, answer, documents)

Application Setup and Launch
Now, we’ll configure the Gradio web interface with sample questions and launch the application for user access.
# Create Gradio interface with streaming support
demo = gr.Interface(
    fn=rag_interface,
    inputs=gr.Textbox(
        label="Ask a question about Python programming",
        placeholder="How do if-else statements work in Python?",
        lines=2,
    ),
    outputs=gr.Markdown(label="Answer"),
    title="Intelligent Document Q&A System",
    description="Ask questions about Python programming concepts and get instant answers with source citations.",
    examples=[
        "How do if-else statements work in Python?",
        "What are the different types of loops in Python?",
        "How do you handle errors in Python?",
    ],
    allow_flagging="never",
)

# Launch the interface with queue enabled for streaming
if __name__ == "__main__":
    demo.queue().launch(share=True)

In this code:

gr.Interface() creates a clean web application with automatic UI generation
fn specifies the function called when users submit questions (includes streaming output)
inputs/outputs define UI components (textbox for questions, markdown for formatted answers)
examples provides clickable sample questions that demonstrate system capabilities
demo.queue().launch(share=True) enables streaming output and creates both local and public URLs

Running the application produces the following output:
* Running on local URL: http://127.0.0.1:7861
* Running on public URL: https://bb9a9fc06531d49927.gradio.live

Test the interface locally or share the public URL to demonstrate your RAG system’s capabilities.

The public URL expires in 72 hours. For persistent access, deploy to Hugging Face Spaces:
gradio deploy

You now have a complete, streaming-enabled RAG system ready for production use with real-time token generation and source citations.
Conclusion
In this article, we’ve built a complete RAG pipeline that turns your documents into an AI-powered question-answering system.
We’ve used the following tools:

MarkItDown for document conversion
LangChain for text chunking and embedding generation
ChromaDB for vector storage
Ollama for local LLM inference
Gradio for web interface

Since all of these tools are open-source, you can easily deploy this system in your own infrastructure.

📚 For comprehensive production deployment practices including configuration management, logging, and data validation, check out Production-Ready Data Science.

The best way to learn is to build, so go ahead and try it out!
Related Tutorials

Alternative Vector Database: Implement Semantic Search in Postgres Using pgvector and Ollama for PostgreSQL-based vector storage
Advanced Document Processing: Transform Any PDF into Searchable AI Data with Docling for specialized PDF parsing and RAG optimization
LangChain Fundamentals: Run Private AI Workflows with LangChain and Ollama for comprehensive LangChain and Ollama integration guide

📚 Want to go deeper? Learning new techniques is the easy part. Knowing how to structure, test, and deploy them is what separates side projects from real work. My book shows you how to build data science projects that actually make it to production. Get the book →

