April 2026 Engineering

How We Built a Mobile Image Editor in 4 Hours (And Unblocked an Entire App)

Sometimes the hardest part of building software isn't the code. It's the waiting. A story about ditching vendor evaluations, beating writer's block of code with two AI chat windows, and shipping in an afternoon.

The vendor trap

Our customer needed an image editor inside their mobile app. Sounds simple enough, right? Crop, rotate, add some filters, maybe slap on some text. The kind of thing you'd think has a dozen off-the-shelf solutions.

And technically, it does. But here's what nobody tells you about finding vendors for niche mobile components:

  • Pricing is a black box. Half the vendors want you to "book a demo" before they'll even whisper a number. I just want to crop photos, not negotiate a peace treaty.
  • Licensing gets weird fast. Per-seat? Per-device? Per-monthly-active-user? Some of these models feel like they were designed by a bored lawyer on a Friday afternoon.
  • Integration is never "plug and play." Every SDK promises 10 minutes to integrate. Every SDK lies. You're three days in, debugging some callback that fires twice on Android 13 but never on iOS 17.
  • Customization hits a wall. You need the crop tool to behave slightly differently? That'll be a feature request, reviewed in Q3, maybe shipped in Q4. Maybe.

We spent more time evaluating vendors than it would've taken to just… build the thing. Which is exactly what we ended up doing.

The writer's block of code

But there's this other problem nobody talks about enough — the blank file problem. You know the feeling. You open your IDE, you create ImageEditor.tsx, and you just… stare at it.

Where do you even start? Canvas APIs? Touch gesture handlers? Matrix transformations for pinch-to-zoom? The mountain of boilerplate between "empty file" and "something that actually renders on screen" is genuinely paralyzing. It's writer's block, but for code.

This is the real bottleneck with bootstrapping UI-heavy features. It's not that any single piece is impossible. It's that the first 80% of wiring — the scaffold, the state management, the gesture plumbing — is tedious enough to kill your momentum before you even get to the interesting parts.
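To make that concrete: most of the gesture plumbing turns out to be small pure functions once you strip away the framework glue. Here's a minimal sketch of the pinch-to-zoom math in plain TypeScript (no React Native imports; the names and shape are mine, not the repo's actual code): scale around the pinch's focal point so the spot under your fingers stays put.

```typescript
// The accumulated view transform: screen = image * scale + (tx, ty).
type Transform = { scale: number; tx: number; ty: number };

// Apply one incremental pinch step, keeping the focal point fixed on screen.
function applyPinch(
  t: Transform,
  pinchScale: number, // incremental scale factor from the gesture
  focalX: number,     // pinch midpoint, screen coordinates
  focalY: number,
): Transform {
  const scale = t.scale * pinchScale;
  // The image point under the focal point must map to the same screen point
  // after scaling, which works out to this translate update:
  const tx = focalX - (focalX - t.tx) * pinchScale;
  const ty = focalY - (focalY - t.ty) * pinchScale;
  return { scale, tx, ty };
}
```

Fifteen lines, but it's exactly the kind of fiddly sign-and-origin math that stalls a blank file for an hour.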

The copy-paste sprint

So here's what actually happened. It was a Thursday afternoon. I opened ChatGPT in one tab, Claude in another (this was pre-Claude Code days, pure chat-and-copy-paste vibes), and I just started asking.

"Give me a React Native component that renders an image on a canvas with pinch-to-zoom."

Paste. Run. Half-broken. Ask again with the error message.

"Now add a crop overlay with draggable corners."

Paste. Run. Corners are there but inverted. Laugh. Fix. Move on.
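The inverted-corner class of bug is almost always missing clamps. A hypothetical sketch of what the fix looks like (plain TypeScript; `dragBottomRight` and the minimum size are my illustration, not the repo's code): clamp the dragged edge to the image bounds and to a minimum size so the rect can never flip inside out.

```typescript
type Rect = { x: number; y: number; w: number; h: number };

// Drag the bottom-right corner by (dx, dy), clamped so the crop rect
// stays inside the image and never inverts (w/h going negative).
function dragBottomRight(
  rect: Rect,
  dx: number,
  dy: number,
  imgW: number,
  imgH: number,
  minSize = 16,
): Rect {
  const w = Math.min(Math.max(rect.w + dx, minSize), imgW - rect.x);
  const h = Math.min(Math.max(rect.h + dy, minSize), imgH - rect.y);
  return { ...rect, w, h };
}
```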

"Add rotation with a slider."

Paste. Tweak. It works. Feel like a wizard.
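Under the hood, a rotation slider is just feeding degrees into standard 2D rotation math. A minimal sketch (again plain TypeScript, my naming): rotate a point around the image center, which is all you need to map touch coordinates back through the rotation.

```typescript
// Rotate (x, y) around pivot (cx, cy) by deg degrees, counterclockwise.
function rotatePoint(
  x: number,
  y: number,
  cx: number,
  cy: number,
  deg: number,
): [number, number] {
  const rad = (deg * Math.PI) / 180;
  const cos = Math.cos(rad);
  const sin = Math.sin(rad);
  const dx = x - cx;
  const dy = y - cy;
  return [cx + dx * cos - dy * sin, cy + dx * sin + dy * cos];
}
```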

The workflow was hilariously low-tech. No fancy tooling, no agentic coding pipelines, no automated test suites. Just me, two chat windows, and a lot of Cmd+C / Cmd+V. I'd bounce between the two models — sometimes Claude nailed the architecture better, sometimes ChatGPT gave me a cleaner snippet for a specific gesture handler. They complemented each other in ways I didn't expect.

Four hours later, we had a working image editor. Crop, rotate, brightness, contrast, text overlay. Was it beautiful code? Absolutely not. Was it production-ready? With a weekend of polish, yes. Did it unblock the entire app release that had been stuck in vendor-evaluation limbo for two weeks? You bet.
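For the record, brightness and contrast are the easy wins on that feature list: both boil down to one per-channel formula. A sketch of the usual form (assuming 8-bit channels and contrast as a scale around mid-gray; not the repo's exact code):

```typescript
// Brightness: additive offset. Contrast: scale around mid-gray (128).
// Result clamped back into the valid 0–255 channel range.
function adjustChannel(v: number, brightness: number, contrast: number): number {
  const adjusted = (v - 128) * contrast + 128 + brightness;
  return Math.max(0, Math.min(255, Math.round(adjusted)));
}
```

Run that over every pixel's R, G, and B and you've got two of the five features.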

Source code

The result of that afternoon lives on GitHub: github.com/santacroce-tech/ImageEditor. Take it, fork it, learn from it — or just confirm for yourself that scrappy code can ship.

What I actually learned

Vendors are great — until they're not. For core, complex, long-lived features, a solid third-party SDK is worth its weight in gold. But for something that just needs to work and ship? The evaluation process itself can become the bottleneck.

AI didn't write the app for me. I want to be honest about this. I still had to understand what I was asking for, debug the output, make architectural decisions, and stitch everything together. The AI handled the boilerplate — the exact stuff that causes code writer's block. It got me from "blank file" to "working skeleton" fast enough that my brain could focus on the interesting decisions.

The copy-paste era was underrated. Everyone's excited about agentic coding now (and rightfully so — it's incredible). But there was something beautifully simple about the chat-and-paste workflow. Low overhead, full control, instant feedback. You don't always need the fanciest hammer.

Four hours is a real number. I'm not exaggerating for the blog. I started after lunch and had a working demo before my kid's soccer practice. Sometimes the scrappy path is the fast path.

The takeaway

If you're stuck evaluating vendors for a feature that's blocking your release — take a Thursday afternoon, open a couple of AI chat windows, and just start building. You might surprise yourself.

The worst that happens? You waste four hours and learn a lot about canvas APIs. The best that happens? You ship.

We shipped.