Show HN: Continue (YC S23) – Open-source coding autopilot

Hi HN, we're Nate and Ty, co-founders of Continue, an open-source autopilot for software development, built to be deeply customizable and to continuously learn from development data. It consists of an extended language server and (to start) a VS Code extension. Our GitHub is https://ift.tt/tO9Rql6. You can watch a demo of Continue and download the extension at https://continue.dev

— — —

A growing number of developers are replacing Google + Stack Overflow with Large Language Models (LLMs) as their primary way of getting help, much as developers previously replaced reference manuals with Google + Stack Overflow. However, existing LLM developer tools are cumbersome black boxes: developers are stuck copy/pasting from ChatGPT and guessing what context Copilot uses to make a suggestion. As we use these products, we expose how we build software and give implicit feedback that is used to improve their LLMs, yet we don't benefit from this data, nor do we get to keep it.

The solution is to give developers what they need: transparency, hackability, and control. Every one of us should be able to reason about what's going on, tinker, and have control over our own development data. This is why we created Continue.

— — —

At its most basic, Continue removes the need for copy/pasting from ChatGPT: instead, you collect context by highlighting code, then ask questions in the sidebar or have an edit streamed directly to your editor.

But Continue also provides powerful tools for managing context. For example, type '@issue' to quickly reference a GitHub issue as you are prompting the LLM, '@README.md' to reference that file, or '@google' to include the results of a Google search.

And there's a ton of room for further customization. Today, you can write your own (a rough config sketch follows below):

- slash commands (e.g. '/commit' to write a summary and commit message for staged changes, '/docs' to grab the contents of a file and update documentation pages that depend on it, '/ticket' to generate a full-featured ticket with relevant files and high-level instructions from a short description)

- context sources (e.g. GitHub issues, Jira, local files, StackOverflow, documentation pages)

- a templated system message (e.g. "Always give maximally concise answers. Adhere to the following style guide whenever writing code: ")

- tools (e.g. add a file, run unit tests, build and watch for errors)

- policies (e.g. define a goal-oriented agent that works in a write code, run code, read errors, fix code, repeat loop)

Continue works with any LLM, including local models running with ggml or open-source models hosted on your own cloud infrastructure, allowing you to remain 100% private. While OpenAI and Anthropic perform best today, we are excited to support the progress of open-source models as they catch up (https://ift.tt/3caBRvi...).

When you use Continue, you automatically collect data on how you build software. By default, this development data is saved to `.continue/dev_data` on your local machine. When combined with the code that you ultimately commit, it can be used to improve the LLM that you or your team use (if you allow); a small script for poking at this data is sketched below. You can read more about how development data is generated as a byproduct of LLM-aided development and why we believe you should start collecting it now: https://ift.tt/stAy08d...

Continue has an Apache 2.0 license.
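To make the customization above concrete, here is a minimal sketch of what a local Python config could look like. The class and field names (ContinueConfig, CustomCommand, and so on) are illustrative assumptions for this post, not necessarily Continue's actual API; the GitHub repo is the source of truth.

```python
# Hypothetical config sketch -- class/field names are illustrative, not
# necessarily Continue's actual API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class CustomCommand:
    name: str         # invoked as /<name> from the sidebar
    description: str
    prompt: str       # prompt template sent to the LLM


@dataclass
class ContinueConfig:
    system_message: str = ""
    custom_commands: List[CustomCommand] = field(default_factory=list)
    context_providers: List[str] = field(default_factory=list)


config = ContinueConfig(
    # A templated system message applied to every request.
    system_message=(
        "Always give maximally concise answers. "
        "Adhere to the following style guide whenever writing code: {style_guide}"
    ),
    # Custom slash commands, like the /commit example above.
    custom_commands=[
        CustomCommand(
            name="commit",
            description="Summarize staged changes and write a commit message",
            prompt=(
                "Here is the output of `git diff --staged`:\n{diff}\n"
                "Write a short summary and a conventional commit message."
            ),
        ),
    ],
    # Context sources reachable with '@' while prompting.
    context_providers=["issue", "file", "google"],
)
```

The point is the shape rather than the exact fields: slash commands, context sources, and the system message are plain data you edit locally, not settings buried inside a hosted black box.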
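Since development data is just files on your machine, you can inspect it directly. The sketch below assumes the data lives under ~/.continue/dev_data as JSON-lines files; both the path layout and the format are assumptions, so adjust to whatever you actually find on disk.

```python
# Sketch: tally locally collected development data.
# Assumes ~/.continue/dev_data holds JSON-lines (.jsonl) files, one record per
# line -- the on-disk format isn't specified in this post, so adjust as needed.
import json
from pathlib import Path

dev_data_dir = Path.home() / ".continue" / "dev_data"
if not dev_data_dir.is_dir():
    raise SystemExit(f"No development data found at {dev_data_dir}")

for path in sorted(dev_data_dir.glob("*.jsonl")):
    records = 0
    with path.open() as f:
        for line in f:
            if line.strip():
                json.loads(line)  # confirm each record is well-formed JSON
                records += 1
    # The file name is a rough proxy for the kind of event it records.
    print(f"{path.stem}: {records} records")
```

Kept locally like this, the same records are what a team could later feed into fine-tuning, which is the feedback loop the development data engine below is built around.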
We plan to make money by offering organizations a paid development data engine: a continuous feedback loop that ensures the LLMs always have fresh information and code in their preferred style.

— — —

We'd love for you to try out Continue and give us feedback! Let us know what you think in the comments : )

https://ift.tt/tO9Rql6
