Show HN: Auto-optimizing deterministic LLM outputs using knowledge graphs https://ift.tt/3GDoWdH
Hi,

We are building an open-source framework for loading and structuring LLM context to produce accurate and explainable LLM answers using knowledge graphs and vector stores.

We built the tool with four main concepts in mind:

1. Loader -> uses dlt in the backend to load and structure the data
2. Cognify step -> creates a graph of summaries, labels, and factoids that are interconnected across documents and stored as a representation in the vector store
3. Optimizer -> uses DSPy to optimize LLM queries; we plan to extend it to most of the knobs we can turn, such as chunking
4. Search -> allows searching using the search types supported in graph stores (e.g., Neo4j), or hybrid, BM25, and other search types available in vector stores

A minimal sketch of how these four steps fit together is included below the post.

We are quite early with the product, but we would love to hear feedback on what we can improve.

https://ift.tt/cAioQH7

April 22, 2024 at 11:55PM
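To make the four steps concrete, here is a minimal, dependency-free Python sketch of the pipeline shape. Everything in it is a mock written for illustration: the real framework delegates loading to dlt, query optimization to DSPy, and search to graph/vector stores such as Neo4j, none of which are called here, and all class and function names are hypothetical rather than the framework's actual API.

# Toy, self-contained sketch of the four-step pipeline described above.
from dataclasses import dataclass, field


@dataclass
class Factoid:
    text: str
    label: str
    source_doc: str


@dataclass
class KnowledgeGraph:
    factoids: list[Factoid] = field(default_factory=list)
    edges: list[tuple[int, int]] = field(default_factory=list)  # links between factoids


def load(paths: list[str]) -> dict[str, str]:
    """Step 1 (Loader): read and structure raw documents (dlt does this in the real tool)."""
    return {path: open(path, encoding="utf-8").read() for path in paths}


def cognify(documents: dict[str, str]) -> KnowledgeGraph:
    """Step 2 (Cognify): derive labeled factoids and interconnect them into a graph."""
    graph = KnowledgeGraph()
    for doc, text in documents.items():
        for sentence in text.split("."):
            if sentence.strip():
                graph.factoids.append(Factoid(sentence.strip(), label="factoid", source_doc=doc))
    # Naively connect consecutive factoids; the real tool links them across documents.
    graph.edges = [(i, i + 1) for i in range(len(graph.factoids) - 1)]
    return graph


def optimize_query(query: str) -> str:
    """Step 3 (Optimizer): DSPy would tune the query/prompt; here it is just normalized."""
    return query.strip().lower()


def search(graph: KnowledgeGraph, query: str, top_k: int = 3) -> list[Factoid]:
    """Step 4 (Search): stand-in for graph / hybrid / BM25 search using keyword overlap."""
    terms = set(optimize_query(query).split())
    scored = [(len(terms & set(f.text.lower().split())), f) for f in graph.factoids]
    return [f for score, f in sorted(scored, key=lambda x: -x[0])[:top_k] if score > 0]


if __name__ == "__main__":
    docs = load(["README.md"])  # any local text file works for the demo
    graph = cognify(docs)
    for hit in search(graph, "How is the data loaded and structured?"):
        print(f"[{hit.source_doc}] {hit.text}")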
Comments
Thank you :)
If you like it, please share.