Interactive Document Understanding and Querying using LLMs
- What its purpose is:
    - Implemented a system that performs Retrieval-Augmented Generation (RAG) with an LLM to answer questions grounded in the user's data.
    - Enabled users to query their own data repositories effectively, retrieving accurate, context-grounded information.
    - Embedded user documents and queries in the same vector space and performed semantic search to retrieve candidate passages for each query.
    - Generated contextually relevant responses with the LLM, delivering precise insights tailored to each user query.
- Technologies Used:
    - Python, OpenAI API, LangChain
- GitHub: Project Link
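The retrieval step described above (shared vector space, semantic search, then a grounded prompt) can be sketched as follows. This is a minimal illustration, not the project's actual code: the toy bag-of-words `embed` function here stands in for the OpenAI embedding model, and the chunk texts are invented sample data.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" used only for illustration;
    # the real system would call an embedding model (e.g. via the OpenAI API)
    # so that documents and queries share one vector space.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Semantic search: rank stored chunks by similarity to the query
    # and return the top-k candidates as context for the LLM.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Hypothetical sample data repository, split into chunks.
chunks = [
    "Invoices are due within 30 days of issue.",
    "The cafeteria serves lunch from noon to two.",
    "Late invoice payments incur a 2 percent fee.",
]

question = "When is an invoice payment due?"
top = retrieve(question, chunks, k=2)

# The retrieved chunks become the grounding context in the LLM prompt;
# the actual generation call (OpenAI / LangChain) is omitted here.
prompt = "Answer using only this context:\n" + "\n".join(top) + "\n\nQ: " + question
```

In the real pipeline, `prompt` would be sent to the LLM so the answer stays grounded in the retrieved passages rather than the model's general knowledge.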