A Knowledge Base That Answers: Building a RAG Assistant with Low-Code

We spend a huge amount of time not on design work, but on "knowledge archaeology": digging for relevant requirements and answering colleagues' questions. Attempts to introduce AI often fail because of data quality: neural networks hallucinate when they cannot make sense of our tables, schemas, and domain-specific specification formats.

How can we turn scattered documentation into a smart assistant that answers questions about business processes and integrations?

The talk covers a technical implementation of RAG (Retrieval-Augmented Generation) with modern tooling: why robust parsing (LlamaParse) is crucial for complex specifications with tables, how to organise knowledge storage in a vector database (Pinecone), and how to wire it all into a single pipeline, with little to no code, using n8n.
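To make the retrieval step concrete, here is a minimal, self-contained sketch of the core RAG loop: chunked documents are embedded, stored in an index, and the chunks most similar to a question are stuffed into the prompt. The toy bag-of-words embedding and the in-memory list are stand-ins for a real embedding model and a vector database like Pinecone; the sample chunks and function names are illustrative, not from the talk.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding". A real pipeline would call an
    # embedding model here and store dense vectors instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# In-memory "index" standing in for a vector database such as Pinecone.
# In practice these chunks would come from parsed specifications.
chunks = [
    "The orders service exposes a REST endpoint POST /orders.",
    "Invoices are exported nightly to the ERP via SFTP.",
    "Customer data is synchronised through a Kafka topic.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(question: str, k: int = 2) -> list[str]:
    # Rank stored chunks by similarity to the question; return top-k.
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

def build_prompt(question: str) -> str:
    # Augment the question with retrieved context before sending to an LLM.
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How are invoices exported to the ERP?"))
```

In a low-code setup, each of these functions maps to an n8n node (embed, vector-store query, prompt assembly) rather than hand-written code; the logic, however, is the same.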

The talk also draws on real-world experience, with architecture diagrams and an analysis of the mistakes made during data preparation.
