Show HN: Turn a video of an app into a functional prototype with Claude Opus https://ift.tt/zRXbnQ1
Hey everyone, I'm the maintainer of the popular screenshot-to-code repo on GitHub (46k+ stars). When Claude Opus was released, I wondered: if you sent in a video of yourself using a website or app, could the LLM build it as a functional prototype? To my surprise, it worked quite well. Here are two examples:

* In this video, the AI replicates Google with auto-complete suggestions and a search results page (it failed at putting the results on a separate page). https://ift.tt/mVgWluA

* Here, we show it a multi-step form ( https://ift.tt/HYtZ8eT ) and ask Claude to re-create it. It does a really good job! https://ift.tt/Js5rNOA

The technical details: Claude Opus only allows a maximum of 20 images per request, so 20 frames are extracted from the video and passed along with a prompt that uses several Claude-specific techniques, such as XML tags and pre-filling the assistant response. In total, 2 passes are performed, with the second pass instructing the AI to improve on the first attempt. More passes might help as well. While I suspect the model has Google.com memorized, it tends to work quite well for many other multi-page/multi-screen apps too.

You can try it out by downloading the GitHub repo and setting up an Anthropic API key in backend/.env. Be warned that one creation/iteration (with 2 passes) can be quite expensive ($3-6).

https://ift.tt/ZV3n6C9

March 22, 2024 at 12:36AM
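For the curious, here is a minimal sketch of the two-pass flow described above. It assumes a Python backend using OpenCV for frame extraction and the official anthropic SDK; the function names, prompt text, and output format (HTML) are illustrative assumptions, not the repo's actual code.

```python
# Sketch: sample up to 20 frames from a video, send them to Claude Opus with a
# prefilled assistant response, then do a second pass asking it to improve.
# Assumes ANTHROPIC_API_KEY is set (e.g. via backend/.env); names are hypothetical.
import base64
import cv2
import anthropic

MAX_FRAMES = 20  # Claude Opus accepts at most 20 images per request
MODEL = "claude-3-opus-20240229"

def extract_frames(video_path: str, max_frames: int = MAX_FRAMES) -> list[str]:
    """Return up to `max_frames` evenly spaced frames as base64-encoded JPEGs."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    step = max(total // max_frames, 1)
    frames = []
    for i in range(0, total, step):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i)
        ok, frame = cap.read()
        if not ok:
            break
        _, buf = cv2.imencode(".jpg", frame)
        frames.append(base64.b64encode(buf.tobytes()).decode("utf-8"))
        if len(frames) == max_frames:
            break
    cap.release()
    return frames

def video_to_prototype(video_path: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    images = [
        {"type": "image",
         "source": {"type": "base64", "media_type": "image/jpeg", "data": f}}
        for f in extract_frames(video_path)
    ]
    # XML tags in the prompt, per Claude-specific prompting guidance.
    prompt = ("<task>Recreate the app shown in these video frames as a single "
              "functional HTML/JS prototype.</task>")

    # Pass 1: frames + prompt, with the assistant response prefilled with "<html>".
    first = client.messages.create(
        model=MODEL,
        max_tokens=4096,
        messages=[
            {"role": "user", "content": images + [{"type": "text", "text": prompt}]},
            {"role": "assistant", "content": "<html>"},
        ],
    )
    draft = "<html>" + first.content[0].text

    # Pass 2: feed the first attempt back and ask Claude to improve on it.
    second = client.messages.create(
        model=MODEL,
        max_tokens=4096,
        messages=[
            {"role": "user", "content": images + [{"type": "text", "text": prompt}]},
            {"role": "assistant", "content": draft},
            {"role": "user", "content": "Improve on your first attempt: fix layout, "
                                        "interactions, and any missing screens."},
            {"role": "assistant", "content": "<html>"},
        ],
    )
    return "<html>" + second.content[0].text
```

Each pass is a separate API call with the frames re-attached, which is why a single creation with 2 passes gets expensive.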
Comments
Thank you :)
If you like it, please share.