
LLM

Building a Conversational Interface for Elasticsearch Data with Kestra and OpenAI

To create natural language interactions with Elasticsearch data, use Kestra with the OpenAI plugin. This combination lets users explore structured data through an intuitive, conversational interface.

Key advantages:

Custom data processing and AI prompts for your specific use case

Easy configuration and deployment with Kestra’s YAML-based workflow

Here’s a code example demonstrating this workflow:

id: movie_recommendation_system
namespace: entertainment.movies

inputs:
  - id: user_preference
    type: STRING
    defaults: I like action movies with a bit of comedy

tasks:
  - id: search_movies
    type: io.kestra.plugin.elasticsearch.Search
    connection:
      hosts:
        - http://localhost:9200/
    indexes:
      - movies_database
    request:
      size: 5
      query:
        bool:
          must:
            multi_match:
              query: "{{ inputs.user_preference }}"
              fields: ["title", "description", "genre"]
              type: best_fields

  - id: format_movie_results
    type: io.kestra.plugin.core.debug.Return
    format: >
      {% for movie in outputs.search_movies.rows %}
      Title: {{ movie.title }}
      Genre: {{ movie.genre }}
      Description: {{ movie.description }}
      {% endfor %}

  - id: generate_recommendations
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: sk-proj-your-OpenAI-API-KEY
    model: gpt-4
    maxTokens: 500
    prompt: |
      You're a movie recommendation assistant.
      Based on the USER PREFERENCE and the MOVIE RESULTS, suggest 3 movies from the list.
      Explain why each movie matches the user's preference.
      USER PREFERENCE: {{ inputs.user_preference }}
      MOVIE RESULTS: {{ outputs.format_movie_results.value }}

  - id: display_recommendations
    type: io.kestra.plugin.core.log.Log
    message: "{{ outputs.generate_recommendations.choices | jq('.[].message.content') | first }}"

This example shows:

The search_movies task uses the Elasticsearch Search plugin to query the index based on the user's preference.

The format_movie_results task formats the search results into plain text.

The generate_recommendations task uses the OpenAI ChatCompletion plugin to analyze the results and the user's preference, creating personalized movie recommendations with explanations.


RouteLLM: Optimizing LLM Usage with Smart Query Routing

An LLM router optimizes the use of various language models by directing each query to the most suitable model. RouteLLM is a framework for serving and evaluating LLM routers.

Key features:

✅ Cost Reduction: Potential savings of up to 85%

✅ Performance Maintenance: Retains 95% of GPT-4 benchmark performance

✅ Seamless Integration: Drop-in replacement for OpenAI’s client

✅ Smart Routing: Directs queries based on complexity
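
Here's a minimal sketch of the drop-in client, based on RouteLLM's Controller interface; the router choice, model names, and cost threshold below are illustrative assumptions:

import os
from routellm.controller import Controller

os.environ["OPENAI_API_KEY"] = "sk-..."

# Route between a strong and a weak model (names are illustrative)
client = Controller(
    routers=["mf"],  # matrix factorization router
    strong_model="gpt-4-1106-preview",
    weak_model="mistralai/Mixtral-8x7B-Instruct-v0.1",
)

# OpenAI-compatible call; the model string encodes the router and cost threshold
response = client.chat.completions.create(
    model="router-mf-0.11593",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)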

Link to RouteLLM.

Ludwig: Simplify Custom AI Model Development

Ludwig is a low-code framework for building custom AI models.

Key features:

🔧 Build custom models easily with a declarative YAML configuration file.

🚀 Experience enhanced training with features like auto-selected batch sizes, distributed training, and parameter fine-tuning.

📊 Gain access to hyperparameter optimization, explainability, and rich metric visualizations.
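
Here's a minimal sketch of Ludwig's Python API; the dataset and column names are hypothetical, and the dictionary config mirrors Ludwig's declarative YAML format:

import pandas as pd
from ludwig.api import LudwigModel

# Declare what goes in and what comes out; Ludwig handles the model details
config = {
    "input_features": [{"name": "review_text", "type": "text"}],
    "output_features": [{"name": "sentiment", "type": "category"}],
}

model = LudwigModel(config)
# train returns (training statistics, preprocessed data, output directory)
train_stats, _, _ = model.train(dataset=pd.read_csv("reviews.csv"))  # hypothetical CSV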

Link to Ludwig.

Storm: Leverage Large Language Models for Faster, Reliable Research

Conducting thorough research and writing detailed articles is essential, but collecting, synthesizing, and citing accurate information from various sources takes extensive time.

Storm addresses this by using Large Language Models (LLMs) to automate the research process, generate comprehensive reports, and provide proper citations, thus making the task more efficient and dependable.

Link to Storm.

Maximize Accuracy and Relevance with External Data and LLMs

Combining external data and an LLM offers the best of both worlds: accuracy and relevance. External data provides up-to-date information, while an LLM can generate text based on input prompts. Together, they enable a system to respond helpfully to a wider range of queries.

Mirascope simplifies this combination with Pythonic code. In the sketch below, an LLM processes a natural language prompt and decides when to query the database for data.
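
Here's a minimal sketch, assuming Mirascope v1's decorator and tool-calling interface (the API may differ across versions); the rating lookup is a hypothetical stand-in for a real database query:

from mirascope.core import openai

def get_movie_rating(title: str) -> str:
    """Look up a movie's rating (hypothetical stand-in for a real database query)."""
    ratings = {"Inception": "8.8", "The Matrix": "8.7"}
    return f"{title} is rated {ratings.get(title, 'unknown')}/10."

@openai.call("gpt-4o-mini", tools=[get_movie_rating])
def answer(question: str) -> str:
    return question

response = answer("What is the rating of Inception?")
if tool := response.tool:  # the LLM decided a data lookup was needed
    print(tool.call())
else:
    print(response.content)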

Link to Mirascope.

Phidata: Enhance LLM with Integrated Memory, Knowledge, and Tools

LLMs are limited in the context they can retain and the actions they can take.

Phidata enhances LLMs by integrating three essential components:

Memory: Enables LLMs to have long-term conversations by storing chat history in a database.

Knowledge: Provides LLMs with business context by storing information in a vector database.

Tools: Enables LLMs to take actions such as pulling data from an API, sending emails, or querying a database.
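
Here's a minimal sketch based on phidata's Assistant interface; the DuckDuckGo tool stands in for any action the LLM can take, and the import paths may vary by version:

from phi.assistant import Assistant
from phi.tools.duckduckgo import DuckDuckGo

# An assistant with one tool: the LLM can decide to search the web;
# memory and knowledge would be wired in via storage and knowledge-base options
assistant = Assistant(
    tools=[DuckDuckGo()],
    show_tool_calls=True,  # log which actions the LLM takes
)
assistant.print_response("Summarize this week's open-source LLM news.", markdown=True)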

Link to phidata.

Streamline Your AI Development with KitOps Dev Mode

With KitOps Dev Mode, you can launch a local development portal for any large language model with just two commands.

This user-friendly interface enables you to quickly experiment with different prompts and fine-tune the model’s responses, helping you bring AI-powered applications to market faster.

Link to KitOps.

Mirascope: Extract Structured Data from LLM Outputs

Large Language Models (LLMs) are powerful at producing human-like text, but their outputs lack structure, which can limit their usefulness in many practical applications that require organized data.

Mirascope offers a solution by reliably extracting structured information from LLM outputs.

The following sketch uses Mirascope to extract meeting details such as topic, date, time, and participants; it assumes Mirascope v1's response_model support with Pydantic (the exact API may differ across versions).
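
from pydantic import BaseModel
from mirascope.core import openai

# The schema the LLM output must conform to
class MeetingDetails(BaseModel):
    topic: str
    date: str
    time: str
    participants: list[str]

@openai.call("gpt-4o-mini", response_model=MeetingDetails)
def extract_meeting_details(text: str) -> str:
    return f"Extract the meeting details from this text: {text}"

meeting = extract_meeting_details(
    "Let's sync on the Q3 roadmap this Friday at 3 PM with Alice and Bob."
)
print(meeting)  # a validated MeetingDetails instance, e.g. topic='Q3 roadmap sync'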

Link to Mirascope.

LLMTime: Leveraging Language Models for Zero-Shot Time Series Forecasting

LLMTime provides a method for zero-shot time series forecasting with large language models (LLMs). By transforming numerical data into text, it lets the model predict future values of a series much as it would continue a natural sentence.

LLMTime can outperform many popular time-series methods without any training on the target dataset.
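
Here's an illustrative sketch of the core encoding idea (not the library's actual API): numbers become digit strings so the LLM can continue the "sentence":

def encode_series(values: list[float], precision: int = 2) -> str:
    # Space-separate digits and comma-separate values, following the paper's
    # tokenization-friendly encoding
    encoded = []
    for value in values:
        digits = f"{value:.{precision}f}".replace(".", "")
        encoded.append(" ".join(digits))
    return " , ".join(encoded)

history = [0.64, 0.70, 0.81, 0.93]
prompt = encode_series(history) + " ,"
print(prompt)  # "0 6 4 , 0 7 0 , 0 8 1 , 0 9 3 ,"
# Feed the prompt to an LLM and decode its continuation back into numbers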

Link to LLMTime.
