I’d like to share an early update on the micro-project my team and I are working on. It’s Blenny, an AI vision co-pilot for the web. The goal with Blenny is to make AI just a little more accessible and useful for everyday tasks.
With both the vision (GPT-4V) and LLM (GPT-4) capabilities powered by OpenAI, Blenny aims to be a helpful companion that can provide context and insights based on what’s visible on your screen.
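For the curious, a request like this maps onto a single call to OpenAI's chat completions endpoint with an image attached. Here's a rough TypeScript sketch of what that plumbing could look like; the function name and structure are ours for illustration, not Blenny's actual internals:

```typescript
// Illustrative sketch: send a captured screen region (base64 PNG) plus a
// prompt to OpenAI's vision model. `askAboutRegion` is a hypothetical name.
async function askAboutRegion(
  apiKey: string,
  pngBase64: string,
  prompt: string
): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4-vision-preview",
      max_tokens: 500,
      messages: [
        {
          role: "user",
          content: [
            { type: "text", text: prompt },
            {
              type: "image_url",
              image_url: { url: `data:image/png;base64,${pngBase64}` },
            },
          ],
        },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```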
It’s available as a browser extension for Chrome. There’s no paywall, but you’ll need to bring your own OpenAI API key to use Blenny. Also note that it’s still a WIP.
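Since Blenny runs entirely on your own key, the extension has to persist that key locally. A minimal sketch of how a Manifest V3 extension can do this with the standard `chrome.storage` API (whether Blenny stores it exactly this way is an assumption on our part):

```typescript
// Hypothetical sketch of bring-your-own-key storage via chrome.storage.sync.
// The storage key name `openaiApiKey` is illustrative.
function saveApiKey(key: string): Promise<void> {
  return chrome.storage.sync.set({ openaiApiKey: key });
}

async function loadApiKey(): Promise<string | undefined> {
  const { openaiApiKey } = await chrome.storage.sync.get("openaiApiKey");
  return openaiApiKey;
}
```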
Getting started:
- Install Blenny, and add your API Key.
- Press Ctrl + B or Cmd + B to select the screen area (see the sketch after this list for how a shortcut like this can be wired up).
- Apply custom prompts, or chat about the selected context.
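For readers building something similar, the Ctrl/Cmd + B shortcut maps naturally onto Chrome's `commands` API. A hedged sketch of a background service worker reacting to it (we haven't published Blenny's manifest, so command and message names here are illustrative):

```typescript
// Hypothetical sketch: handle a keyboard command in an MV3 service worker.
// Assumes a matching manifest.json entry, e.g.
//   "commands": { "toggle-selection": {
//     "suggested_key": { "default": "Ctrl+B", "mac": "Command+B" } } }
chrome.commands.onCommand.addListener(async (command) => {
  if (command !== "toggle-selection") return;
  const [tab] = await chrome.tabs.query({ active: true, currentWindow: true });
  if (tab?.id !== undefined) {
    // Ask the content script to start the screen-area selection overlay.
    chrome.tabs.sendMessage(tab.id, { type: "START_AREA_SELECTION" });
  }
});
```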
At its core, we want this little tool to cater to whatever use case you throw at it. We will give Blenny web access and add support for better prompt libraries (GPTs) and custom agents down the road.