Since the first image editing models became available, image editing has been a core feature of Chat-O. We’ve always kept the feature up to date with the latest models, and today we’re releasing our biggest improvement yet.
Over the past few weeks, we did a full refactor of the parts of our system that handle image generation and editing. The goal was to make sequential editing work better.
Getting an image right usually takes a few tries. You generate something, then realize you want different lighting. Or maybe a different background. Maybe both. Our previous implementation would sometimes break when you tried to edit an image multiple times in a row. You’d make one edit successfully, then the second or third would fail or give unexpected results.
We expect this refactor to fix most of these issues.
Complete rebuild of editing logic
We refactored how the system handles sequential edits from the ground up. The old implementation was passing too much context to the AI models: your entire conversation history, failed attempts, unrelated messages. This would confuse the models and cause errors on the second or third edit.
The new system only sends exactly what’s needed: the most recent image and your current edit request. This should make sequential editing more reliable.
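To make the idea concrete, here is a minimal sketch of that context-trimming approach. The function name, field names, and history shape are illustrative assumptions, not Chat-O’s actual internals:

```python
# Hypothetical sketch: instead of sending the whole conversation, send only
# the most recent successful image plus the current edit instruction.

def build_edit_request(history, current_prompt):
    """Return the minimal payload for a sequential edit."""
    # Walk backwards to find the latest successfully produced image,
    # skipping failed attempts and unrelated messages.
    latest_image = next(
        (msg["image"] for msg in reversed(history)
         if msg.get("status") == "ok" and msg.get("image")),
        None,
    )
    if latest_image is None:
        raise ValueError("no prior image to edit")
    # Only two fields reach the model: the image and the new instruction.
    return {"image": latest_image, "prompt": current_prompt}

history = [
    {"image": "img_v1.png", "status": "ok"},
    {"image": None, "status": "error"},   # failed attempt, ignored
    {"image": "img_v2.png", "status": "ok"},
]
print(build_edit_request(history, "make the lighting warmer"))
# → {'image': 'img_v2.png', 'prompt': 'make the lighting warmer'}
```

The payoff of a design like this is that each edit request stays the same size no matter how long the conversation gets, so the tenth edit is as reliable as the first.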
Smarter error recovery
We rebuilt the error handling system. When an edit failed in the past, it would sometimes leave behind corrupted data that would mess up your next request. The refactored system includes automatic cleanup that filters out these artifacts.
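A filtering pass like the one described might look something like this sketch; the entry fields and statuses are assumptions for illustration:

```python
# Hypothetical cleanup step: before building the next request, drop any
# entries a failed edit may have left behind (missing images, error states).

def clean_history(history):
    """Keep only complete, successful entries."""
    return [
        msg for msg in history
        if msg.get("status") == "ok" and msg.get("image")
    ]

raw = [
    {"image": "img_v1.png", "status": "ok"},
    {"image": None, "status": "error"},   # leftover from a failed edit
    {"image": "", "status": "ok"},        # incomplete entry, also dropped
    {"image": "img_v2.png", "status": "ok"},
]
print(clean_history(raw))  # only the two complete, successful entries remain
```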
State-of-the-art models for both generation and editing
For image generation, we’re now using state-of-the-art models including Imagen 4 Fast (Google’s latest), Ideogram V3 Turbo, and Flux Schnell. The system picks one for each generation request.
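The per-request selection could be as simple as the sketch below. The model names come from the post, but the routing logic shown (a random pick per request) is an assumption, not Chat-O’s actual strategy:

```python
import random

# Model names from the announcement; the selection policy is illustrative.
GENERATION_MODELS = ["imagen-4-fast", "ideogram-v3-turbo", "flux-schnell"]

def pick_generation_model(rng=random):
    """Pick one model for a single generation request."""
    return rng.choice(GENERATION_MODELS)

print(pick_generation_model())
```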
For image editing, we’re using the latest available models that are designed to handle sequential edits and multimodal tasks.
These models are faster than what we had before. Most image generations finish in under 10 seconds.
For both generation and editing, we use the latest AI models available, and we swap in newer models as soon as they meet our standards, so you’re always working with current technology.
Image editing is one feature, but our bigger goal is to build the best place to work with AI overall. We want Chat-O to be where you go when you need to get something done with AI, whether that’s writing, coding, generating images, or all of the above.
If you want to test the upgraded image editing, try making several edits in a row: the refactored system should handle multiple edits in sequence much better than before.
Want to try it? Sign up here and you’ll get 1,000 free credits. That’s enough to generate a bunch of images and test out the editing.
Have feedback or issues? Email us at [email protected]. We actually read and respond to everything.