Andrejus Baranovski

Blog about Oracle, Full Stack, Machine Learning and Cloud

FastAPI File Upload and Temporary Directory for Stateless API

Sun, 2024-03-17 09:32
I explain how to handle file upload with FastAPI and how to process the file using a Python temporary directory. Files placed into the temporary directory are automatically removed once the request completes, which is very convenient for a stateless API.
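
A minimal sketch of this pattern, assuming a simple endpoint that only measures the uploaded file (route name and processing step are illustrative):

```python
import shutil
import tempfile
from pathlib import Path

from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post("/upload")
async def upload(file: UploadFile = File(...)):
    # The directory and everything in it is removed when the block exits,
    # so nothing stays on disk after the request completes.
    with tempfile.TemporaryDirectory() as tmp_dir:
        tmp_path = Path(tmp_dir) / file.filename
        with open(tmp_path, "wb") as out:
            shutil.copyfileobj(file.file, out)
        # Process the file here while it still exists, e.g. read its size.
        size = tmp_path.stat().st_size
    return {"filename": file.filename, "size": size}
```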

 

Optimizing Receipt Processing with LlamaIndex and PaddleOCR

Sun, 2024-03-10 14:09
The LlamaIndex text completion function allows executing an LLM request that combines custom data and the question, without using a Vector DB. This is very useful when processing OCR output, as it simplifies the RAG pipeline. In this video I explain how OCR can be combined with an LLM to process image documents in Sparrow.
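
A rough sketch of the OCR-plus-completion idea (the model name, prompt wording, and result parsing are assumptions; the exact PaddleOCR result layout and LlamaIndex import paths vary by version):

```python
from paddleocr import PaddleOCR
from llama_index.llms.ollama import Ollama  # import path may differ by LlamaIndex version

# 1. OCR the receipt image into plain text.
ocr = PaddleOCR(lang="en")
result = ocr.ocr("receipt.jpg")
lines = [entry[1][0] for entry in result[0]]  # entry = [box, (text, confidence)]
receipt_text = "\n".join(lines)

# 2. Send the OCR text plus the question straight to the LLM, no vector store involved.
llm = Ollama(model="mistral")
prompt = (
    "Below is the OCR output of a receipt:\n"
    f"{receipt_text}\n\n"
    "Question: what is the total amount and the purchase date? Answer in JSON."
)
print(llm.complete(prompt).text)
```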

 

LlamaIndex Multimodal with Ollama [Local LLM]

Sun, 2024-03-03 13:03
I describe how to run LlamaIndex Multimodal with the local LLaVA LLM through Ollama. The advantage of this approach is that you can process image documents with the LLM directly, without running them through OCR, which should lead to better results. This functionality is integrated into Sparrow as a separate LLM agent.
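
A minimal sketch of that flow (import path, `complete` signature, and the prompt/directory names are assumptions and depend on the LlamaIndex version):

```python
from llama_index.core import SimpleDirectoryReader
from llama_index.multi_modal_llms.ollama import OllamaMultiModal  # path may differ by version

# LLaVA pulled locally via `ollama pull llava`
mm_llm = OllamaMultiModal(model="llava")

# Load the invoice images as image documents, no OCR step.
image_documents = SimpleDirectoryReader("./invoice_images").load_data()

response = mm_llm.complete(
    prompt="Extract the invoice number, date and total amount as JSON.",
    image_documents=image_documents,
)
print(response.text)
```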

 

LLM Agents with Sparrow

Mon, 2024-02-26 01:53
I explain new functionality in Sparrow - support for LLM agents. This means you can implement independently running agents and invoke them from the CLI or API. This makes it easier to run various LLM-related processing within Sparrow.

 

Extracting Invoice Structured Output with Haystack and Ollama Local LLM

Tue, 2024-02-20 02:49
I implemented a Sparrow agent with Haystack structured output functionality to extract invoice data. It runs locally through Ollama, using an LLM to retrieve key/value pair data.

 

Local LLM RAG Pipelines with Sparrow Plugins [Python Interface]

Sun, 2024-02-04 09:12
There are many tools and frameworks around LLMs, evolving and improving daily. I added plugin support in Sparrow to run different pipelines through the same Sparrow interface. Each pipeline can be implemented with different tech (LlamaIndex, Haystack, etc.) and run independently. The main advantage is that you can test various RAG functionalities from a single app with a unified API and choose the one that works best for a specific use case.
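
The snippet below is only a generic illustration of this plugin-dispatch pattern, with hypothetical names; it is not Sparrow's actual code:

```python
from typing import Callable, Dict

# Hypothetical registry: every pipeline exposes the same run(query, files) signature.
PIPELINES: Dict[str, Callable[..., str]] = {}

def register(name: str):
    def wrapper(fn):
        PIPELINES[name] = fn
        return fn
    return wrapper

@register("llamaindex")
def run_llamaindex(query: str, files: list) -> str:
    ...  # build and run a LlamaIndex RAG pipeline here

@register("haystack")
def run_haystack(query: str, files: list) -> str:
    ...  # build and run a Haystack RAG pipeline here

def run(pipeline: str, query: str, files: list) -> str:
    # Single entry point: the caller picks the pipeline by name,
    # the rest of the app stays unchanged.
    return PIPELINES[pipeline](query, files)
```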

 

LLM Structured Output with Local Haystack RAG and Ollama

Mon, 2024-01-29 13:27
Haystack 2.0 provides functionality to process LLM output and ensure a proper JSON structure, based on a predefined Pydantic class. I show how you can run this on your local machine with Ollama. This is possible thanks to the OllamaGenerator class available in Haystack.
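
A pared-down sketch of that combination (the model name, URL format, and Pydantic fields are assumptions; the full Haystack pipeline with a validation/retry loop is omitted):

```python
from haystack_integrations.components.generators.ollama import OllamaGenerator  # ollama-haystack package
from pydantic import BaseModel

class Invoice(BaseModel):
    invoice_number: str
    total: float

generator = OllamaGenerator(
    model="mistral",
    url="http://localhost:11434",  # some versions expect .../api/generate
)

prompt = (
    "Extract invoice_number and total from the text below. "
    "Respond with JSON only, matching this schema: "
    f"{Invoice.model_json_schema()}\n\nText: ..."
)

reply = generator.run(prompt=prompt)["replies"][0]
invoice = Invoice.model_validate_json(reply)  # raises if the reply does not match the schema
print(invoice)
```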

 

JSON Output with Notus Local LLM [LlamaIndex, Ollama, Weaviate]

Tue, 2024-01-23 02:16
In this video, I show how to get JSON output from the Notus LLM running locally with Ollama. The JSON output is generated with LlamaIndex using the dynamic Pydantic class approach.
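
The dynamic Pydantic part can be sketched like this (the field names are invented for illustration; the LlamaIndex program/output-parser wiring is omitted):

```python
from pydantic import create_model

# Build the output schema at runtime, e.g. from fields requested by the user.
requested_fields = {
    "invoice_number": (str, ...),
    "invoice_date": (str, ...),
    "total": (float, ...),
}
InvoiceModel = create_model("InvoiceModel", **requested_fields)

# The generated JSON schema is embedded in the LLM prompt,
# and the LLM reply is validated against the same class.
print(InvoiceModel.model_json_schema())
parsed = InvoiceModel.model_validate_json(
    '{"invoice_number": "INV-1", "invoice_date": "2024-01-23", "total": 120.5}'
)
print(parsed)
```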

 

FastAPI and LlamaIndex RAG: Creating Efficient APIs

Mon, 2024-01-15 03:21
FastAPI works great with LlamaIndex RAG. In this video, I show how to build a POST endpoint to execute inference requests for LlamaIndex. The RAG implementation is done as part of the Sparrow data extraction solution. I show how FastAPI can handle multiple concurrent requests to initiate the RAG pipeline. I'm using Ollama to execute LLM calls as part of the pipeline. Ollama processes requests sequentially, which means it handles API requests in queue order. Hopefully, in the future, Ollama will support concurrent requests.
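
A skeleton of such an endpoint (the path, payload fields, and the run_pipeline helper are illustrative, not Sparrow's actual code):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class InferenceRequest(BaseModel):
    question: str
    document: str

def run_pipeline(question: str, document: str) -> str:
    # Placeholder for the LlamaIndex RAG call. Ollama handles one LLM call at a time,
    # so concurrent API requests end up queued on the Ollama side.
    return f"answer for: {question}"

@app.post("/inference")
def inference(req: InferenceRequest):
    # Sync endpoints run in FastAPI's threadpool, so the API itself
    # keeps accepting requests while a pipeline run is in progress.
    answer = run_pipeline(req.question, req.document)
    return {"answer": answer}
```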

 

Transforming Invoice Data into JSON: Local LLM with LlamaIndex & Pydantic

Mon, 2024-01-08 02:49
This is Sparrow, our open-source solution for document processing with local LLMs. I'm running the local Starling LLM with Ollama. I explain how to get structured JSON output with LlamaIndex and a dynamic Pydantic class. This helps implement the data extraction use case for invoice documents. The solution runs on the local machine, thanks to Ollama. I'm using a MacBook Air M1 with 8GB RAM.

 

From Text to Vectors: Leveraging Weaviate for local RAG Implementation with LlamaIndex

Sun, 2023-12-17 07:59
Weaviate provides vector storage and plays an important part in RAG implementation. I'm using local embeddings from the Sentence Transformers library to create vectors for text-based PDF invoices and store them in Weaviate. I explain how the integration is done with LlamaIndex to manage the data ingest and LLM inference pipeline.
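
A condensed version of the ingest side (import paths, the embedding model, and the index name are assumptions for recent LlamaIndex releases and a v3-style Weaviate client):

```python
import weaviate
from llama_index.core import Settings, SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.vector_stores.weaviate import WeaviateVectorStore

# Local embeddings from Sentence Transformers, no external API calls.
Settings.embed_model = HuggingFaceEmbedding(model_name="sentence-transformers/all-MiniLM-L6-v2")

client = weaviate.Client("http://localhost:8080")  # v4 clients use weaviate.connect_to_local()
vector_store = WeaviateVectorStore(weaviate_client=client, index_name="Invoices")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Ingest the PDF invoices: chunks are embedded locally and stored in Weaviate.
documents = SimpleDirectoryReader("./invoices").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```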

 

Enhancing RAG: LlamaIndex and Ollama for On-Premise Data Extraction

Mon, 2023-12-11 06:54
LlamaIndex is an excellent choice for RAG implementation. It provides a great API to work with different data sources and extract data. LlamaIndex also provides an API for Ollama integration, which means we can easily use LlamaIndex with on-premise LLMs through Ollama. I explain a sample app where LlamaIndex works with Ollama to extract data from PDF invoices.
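
A condensed sketch of that flow (recent import paths; the model name and directory are placeholders):

```python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.ollama import Ollama

# On-premise LLM served by Ollama; local embeddings so nothing leaves the machine.
Settings.llm = Ollama(model="mistral", request_timeout=120.0)
Settings.embed_model = "local"  # requires the HuggingFace embeddings package installed

documents = SimpleDirectoryReader("./invoices").load_data()  # PDF invoices
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
print(query_engine.query("What is the total amount on the invoice?"))
```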

 

Secure and Private: On-Premise Invoice Processing with LangChain and Ollama RAG

Tue, 2023-12-05 03:41
The Ollama desktop tool helps run LLMs locally on your machine. This tutorial explains how I implemented a pipeline with LangChain and Ollama for on-premise invoice processing. Running an LLM on-premise provides many advantages in terms of security and privacy. Ollama works similarly to Docker; you can think of it as Docker for LLMs. You can pull and run multiple LLMs, which allows switching between LLMs without changing the RAG pipeline.
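
A tiny illustration of that last point: the model is just a parameter, so switching LLMs does not touch the rest of the chain (model names are examples):

```python
from langchain_community.llms import Ollama  # older releases: from langchain.llms import Ollama

# Swapping the model is only a different name passed to Ollama;
# the surrounding RAG pipeline stays exactly the same.
llm = Ollama(model="mistral")   # ollama pull mistral
# llm = Ollama(model="llama2")  # ollama pull llama2

print(llm.invoke("Explain in one sentence what an invoice due date means."))
```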

 

Easy-to-Follow RAG Pipeline Tutorial: Invoice Processing with ChromaDB & LangChain

Mon, 2023-11-27 07:11
I explain the implementation of the pipeline to process invoice data from PDF documents. The data is loaded into Chroma DB's vector store. Through the LangChain API, the data from the vector store is ready to be consumed by the LLM as part of the RAG infrastructure.
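
A compact sketch of such a pipeline (import paths match late-2023 LangChain releases and later moved to langchain_community; the file name, chunk sizes, and model are placeholders):

```python
from langchain.chains import RetrievalQA
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import Ollama
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

# Load and chunk the invoice PDF.
docs = PyPDFLoader("invoice.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# Store the chunks in a local Chroma vector store.
vectordb = Chroma.from_documents(chunks, HuggingFaceEmbeddings())

# Expose the store as a retriever feeding the LLM.
qa = RetrievalQA.from_chain_type(llm=Ollama(model="mistral"), retriever=vectordb.as_retriever())
print(qa.run("What is the invoice total?"))
```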

 

Vector Database Impact on RAG Efficiency: A Simple Overview

Sun, 2023-11-19 08:54
I explain the importance of the Vector DB for RAG implementation. I show with a simple example how data retrieval from the Vector DB can affect LLM performance. Before data is sent to the LLM, you should verify that quality data was fetched from the Vector DB.
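
One simple way to do that check, sketched with a toy Chroma store (the texts and query are made up for illustration):

```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

# Toy store: in a real pipeline these would be chunks from the ingested documents.
texts = [
    "Invoice total: 1,250.00 EUR",
    "Payment terms: 30 days",
    "Shipping address: Vilnius",
]
vectordb = Chroma.from_texts(texts, HuggingFaceEmbeddings())

# Inspect what retrieval returns before the chunks are stuffed into the LLM prompt.
for doc, score in vectordb.similarity_search_with_score("What is the invoice total?", k=2):
    print(f"score={score:.3f} | {doc.page_content}")

# If the relevant chunk is not among the top results, the LLM answer
# will be poor regardless of which model is used.
```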

 

JSON Output from Mistral 7B LLM [LangChain, Ctransformers]

Mon, 2023-11-13 14:00
I explain how to compose a prompt for the Mistral 7B LLM model running with LangChain and Ctransformers to retrieve the output as a JSON string, without any additional text.
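
A minimal sketch of that setup (the GGUF repo and file names are examples to adjust to your local download; import paths later moved to langchain_community):

```python
from langchain.llms import CTransformers
from langchain.prompts import PromptTemplate

llm = CTransformers(
    model="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",       # example quantized model
    model_file="mistral-7b-instruct-v0.1.Q4_K_M.gguf",    # example file name
    model_type="mistral",
)

# The prompt explicitly asks for JSON only, so no extra explanations are returned.
prompt = PromptTemplate.from_template(
    "[INST] Extract invoice_number, date and total from the text below. "
    "Return a single JSON object and nothing else.\n\n{text} [/INST]"
)

print(llm(prompt.format(text="Invoice INV-42, issued 2023-11-13, total 250.00 EUR")))
```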

 

Structured JSON Output from LLM RAG on Local CPU [Weaviate, Llama.cpp, Haystack]

Mon, 2023-11-06 08:00
I explain how to get structured JSON output from LLM RAG running with the Haystack API on top of Llama.cpp. Vector embeddings are stored in the Weaviate database, the same as in my previous video. When extracting data, a structured JSON response is preferred because we are not interested in additional descriptions.

 

Invoice Data Processing with Llama2 13B LLM RAG on Local CPU [Weaviate, Llama.cpp, Haystack]

Sun, 2023-10-22 13:54
I explain how to set up local LLM RAG to process invoice data with Llama2 13B. Based on my experiments, Llama2 13B works better with tabular data compared to the Mistral 7B model. This example presents a production LLM RAG setup with the Weaviate database for vector embeddings, Haystack for the LLM API, and Llama.cpp to run Llama2 13B on a local CPU.

 

Invoice Data Processing with Mistral LLM on Local CPU

Mon, 2023-10-16 14:19
I explain the solution to extract invoice document fields with the open-source LLM Mistral. It runs on a CPU and doesn't require a Cloud machine. I'm using the Mistral 7B LLM model, LangChain, Ctransformers, and a Faiss vector store to run it on a local CPU machine. This approach gives a great advantage for enterprise systems, where running ML models in the Cloud is not allowed for privacy reasons.

 

Skipper MLOps Debugging and Development on Your Local Machine

Mon, 2023-10-09 03:03
I explain how to stop some of the Skipper MLOps services running in Docker and debug/develop the code for these services locally. This improves the development workflow - there is no need to deploy a code change to the Docker container; it can be tested locally. A service that runs locally connects to the Skipper infrastructure through a RabbitMQ queue.
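
A minimal sketch of a locally run service attaching to that queue with pika (the queue name is illustrative, not Skipper's actual configuration):

```python
import pika

# Local service instance connecting to the same RabbitMQ broker the Skipper stack uses.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="skipper_tasks", durable=True)

def on_message(ch, method, properties, body):
    print(f"Processing message locally: {body!r}")
    ch.basic_ack(delivery_tag=method.delivery_tag)  # acknowledge only after processing

channel.basic_consume(queue="skipper_tasks", on_message_callback=on_message)
channel.start_consuming()  # same messages as the Docker service, but debuggable in an IDE
```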

 
