Philosophy

Four commitments we intend to keep.

These are the promises the site makes on behalf of the product. They're the principles against which the product's design, its engineering, and our decisions should all be measured.

I.

Compose your stack.

No single AI service is best at every medium — text, voice, image, video — and none handles each of them the way every user wants. The strategic move we're making is letting you pick the best provider for each medium and layering them into one experience. Your models, your voice, your vision, in the same room.

The mechanism underneath is Bring-Your-Own-Key: you hold the credentials for each service you connect, you pay those providers directly for your usage, and Banashi performs no inference of its own. Composition is the outcome; BYOK is how we get there without putting ourselves between you and the services you chose.

The alternative — packaging inference behind an app's own API key — collapses two products into one: you'd be locked into whatever model the app happens to be running today, paying a markup, with no portability if the provider's pricing or terms change. We don't want to be that layer.

II.

Your conversations stay yours.

By default, your chats live in the app's local storage on your device. We do not run a server that collects what you write. If you want sync across your devices, you can opt in to iCloud sync, which is end-to-end encrypted: your content is encrypted on your device with keys only you hold, and Apple (and we) see only ciphertext, in transit and at rest.

The only data Banashi itself ever receives about you is the minimum required to run the service — for most users, that is nothing. Your provider, of course, sees the content of your conversations, because they generate the responses; that relationship is between you and them, not mediated by us.

III.

Moderation is structural, not cosmetic.

Banashi runs a content moderation layer on AI output before rendering it. There is a hard floor of content that the app never shows regardless of user settings. Above that floor, additional policy sits behind age verification. The system is designed to be honest about the existence of the moderation layer — both its presence and its limits — rather than hiding it.

We're not going to describe the internals in detail here; doing so would either be marketing (claims we can't verify) or an invitation to adversarial optimization. What we can say is that the moderation is real, runs on every response, and is part of the app by design, not something bolted on to satisfy a store listing.

IV.

We don't run the AI. You do.

This is the promise behind the architecture commitment I describes. We will not pivot to reselling inference, routing your requests through our own provider account, or quietly swapping in a cheaper model behind the scenes. Whichever provider you connected for a given medium, Banashi talks to that provider, and the app is transparent about which service and which model produced any given response — text, voice, image, or video.

The logical consequence is that Banashi's business cannot be built on a margin on tokens. It has to be built on an interface and character engine good enough that people want to pay for the app itself. That's a harder business, and we think it's the right one.