An open-source, privacy-first voice assistant for mobile with real-time API integration. Think "Ollama for mobile + realtime voice."
Connects to your Google Drive, GitHub, Hacker News, and the web.
It's currently a thin wrapper around the OpenAI Realtime speech API, but the long-term vision is to make it extensible and pluggable, with a fully open-source stack.
If this sounds interesting, ⭐️ the project on GitHub to help it grow.
(Demo video: hackernews-arty-demo.mp4)
What's in the demo
- "What are the top stories on Hacker News?"
- "What are the comments about the Montana law story?"
- "Summarize the new Montana law"
(Demo video: Arty_Demo.mp4; or view the full resolution version)
Screenshots: Voice chat (home screen) | Text chat | Configure connectors
Voice AI is now incredibly powerful when connected to your data, yet current solutions are closed source, compromise your privacy, and are headed toward ads and lock-in.
This project offers a fully open alternative: local execution, no data monetization, and complete control over where your data goes.
Security note: TestFlight builds are compiled binaries; do not assume they exactly match this source code. If you require verifiability, build from source and review the code before installing.
Getting Started Instructions
- Create a new OpenAI API key. Grant it the minimum Realtime permissions: Models read, Model capabilities write (see the key-validation sketch after these steps).
- Grant access to the Responses API.
- Paste the key into the onboarding wizard and tap Next.
- Connect Google Drive so Arty can see your files. OAuth tokens stay on-device. See Security + Privacy for details.
- Choose the Google account you want to use.
- Tap “Advanced” (the link expands and toggles to “Hide Advanced”), then tap “Go to vibemachine (unsafe).”
- Review the OAuth scopes that Arty is requesting.
- Confirm the connection. You should see a success screen when Drive is linked.
- Optional: Provide your own Google Drive Client ID for extra control.
- Finish the onboarding wizard.
- Start chatting with Arty.
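As a rough illustration of the key step above, an onboarding wizard can confirm that a pasted key actually works before proceeding. This is a minimal sketch, not the app's actual code; the `verifyApiKey` helper name is hypothetical, and it relies only on the standard OpenAI models endpoint, which the "Models: read" permission covers:

```ts
// Hypothetical helper (not the app's actual code): check that a pasted key
// has at least "Models: read" permission by listing available models.
export async function verifyApiKey(apiKey: string): Promise<boolean> {
  const res = await fetch("https://api.openai.com/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  // A 401/403 response means the key is missing, revoked, or under-scoped.
  return res.ok;
}
```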
How to get the most out of it
- Connect your GitHub account: open the Hamburger Menu → Configure Connectors → GitHub and add a Personal Access Token. When creating the PAT, the recommended scopes are `gist`, `read:org`, and `repo`.
- Personalize Arty: adjust the system prompt, voice, VAD mode, and tool configuration from the Advanced settings sheets to match your workflow.
- Try out text chat mode when you can't use voice: under Settings, switch to text chat mode. Note that there's no streaming-token support yet, so responses feel slow (a sketch of what streaming could look like follows this list).
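For context on that streaming limitation, here is a sketch of what token streaming via the OpenAI Responses API could look like. It assumes a browser/Node-style `fetch` that exposes readable response bodies (React Native's built-in fetch does not, so a streaming-capable polyfill would be needed); the function name and model are illustrative:

```ts
// Sketch only: stream output tokens from the OpenAI Responses API.
// Assumes a fetch implementation with ReadableStream response bodies.
export async function streamText(
  apiKey: string,
  prompt: string,
  onToken: (token: string) => void,
): Promise<void> {
  const res = await fetch("https://api.openai.com/v1/responses", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    // Model name is illustrative.
    body: JSON.stringify({ model: "gpt-4o-mini", input: prompt, stream: true }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial line for the next chunk
    for (const line of lines) {
      if (!line.startsWith("data:")) continue;
      const payload = line.slice(5).trim();
      if (!payload || payload === "[DONE]") continue;
      const event = JSON.parse(payload);
      // The Responses API streams typed events; text arrives as deltas.
      if (event.type === "response.output_text.delta") onToken(event.delta);
    }
  }
}
```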
Features
- Connectors - Google Drive, GitHub, Hacker News, and web search: summarize content in Google Drive, interact with GitHub, browse Hacker News, and search the web by voice
- Extensible - Adding connectors is fairly easy. File an issue to request the connector you'd want to see.
- Customizable prompts - Edit system and tool prompts directly from the UI
- Multi-mode audio - Works with speaker, handset, or Bluetooth headphones
- Background noise handling - Mute yourself in loud environments
- Session recording - Optional conversation recording and sharing
- Voice and text modes - Switch between input methods seamlessly
- Observability - Optional Logfire integration for debugging (disabled by default)
- Privacy-first - No server except connected services—your data stays yours
Limitations
- Cost - High OpenAI API costs due to poor context-window management and fallback strategies
- Text Mode is limited - The Text mode does not support streaming tokens yet. It has a very basic and limited UX.
- Platform - iOS only; Android is not yet supported because the audio layer uses a native Swift WebRTC integration, even though the UI is React Native via Expo
- Performance - Codegen is slow and unreliable; most functionality should be moved to static tools
- UX - No progress indicators during operations
- Security - Dynamic codegen poses risks. Mitigation: use read-only access for connected services
- Recording - Optional call recording implementation doesn't work very reliably since it regenerates the conversation based on a text transcript
Important note: Although tokens never leave the device, some user prompts and connector content are transmitted to the OpenAI Realtime API by design. If you require strict local-only execution, do not use this app. Watch for future updates that support fully self-contained usage or privately hosted models instead.
From a security perspective, the main risks are credential leakage or abuse:
- OpenAI API Key
- Google Drive Auth Token
- GitHub PAT
Mitigation: All credentials remain on-device, stored only in memory or secure storage (iOS Keychain). Audit the source code to verify that no credentials are transmitted externally.
Security + privacy: storage, scopes, and network flow recap
- All token storage in memory and secure storage happens in `lib/secure-storage.ts` (see the sketch after this list).
- The actual saving and retrieval of tokens is delegated to the Expo library `expo-secure-store`.
- Transport security: all outbound requests to OpenAI, Google, GitHub, and Logfire use HTTPS with TLS handled by each provider. This project does not introduce custom proxies or MITM layers.
- Prompt injection and mis-issuance: the app does not currently detect or prevent malicious model output from executing unexpected write actions. Use read-only scopes wherever possible.
- OAuth tokens and API keys are stored via `expo-secure-store`, which maps to the iOS Keychain using the `kSecAttrAccessibleAfterFirstUnlockThisDeviceOnly` accessibility level. Tokens are never written to plaintext disk.
- Recording is off by default, and conversation transcripts are not saved. Optional recordings remain on-device and rely on standard iOS filesystem encryption.
- No third-party endpoints beyond OpenAI, Google, GitHub, and optional Logfire are contacted at runtime. The app does not embed analytics, crash-reporting SDKs, or ad networks.
- The Google Drive OAuth access used by the default Client ID in the TestFlight build is effectively read-only for existing content: the app can create or edit files that it created itself, but cannot edit or delete files that originated elsewhere. For tighter control, register your own Google Drive app, supply its Client ID, and grant the permissions you deem appropriate.
- When creating a GitHub Personal Access Token, choose scopes based on your comfort level. Enable write scopes (for example, issue creation) explicitly; they are not required for basic usage.
- Assume that connector operations which retrieve file contents may send that content to the LLM for summarization unless you have deliberately disabled that behavior.
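To illustrate the storage points above, here is a minimal sketch of what an `expo-secure-store` wrapper in the style of `lib/secure-storage.ts` could look like. The key name and exported function names are illustrative, not necessarily the project's actual exports:

```ts
import * as SecureStore from "expo-secure-store";

// Illustrative storage key, not necessarily the one the app uses.
const OPENAI_API_KEY = "openai-api-key";

export async function saveApiKey(value: string): Promise<void> {
  await SecureStore.setItemAsync(OPENAI_API_KEY, value, {
    // Maps to the iOS Keychain accessibility level named above.
    keychainAccessible: SecureStore.AFTER_FIRST_UNLOCK_THIS_DEVICE_ONLY,
  });
}

export async function getApiKey(): Promise<string | null> {
  return SecureStore.getItemAsync(OPENAI_API_KEY);
}

export async function clearApiKey(): Promise<void> {
  await SecureStore.deleteItemAsync(OPENAI_API_KEY);
}
```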
Observability logs are disabled by default. Note that these should be automatically scrubbed of API tokens by Logfire itself. Only enable Logfire after you have audited the code and feel comfortable—this is mainly a developer feature and not recommended for casual usage or testing.
Out of scope: This project does not currently defend against (1) on-device compromise, (2) malicious LLM responses executing actions against connected services using delegated tokens, or (3) interception of API traffic by the model provider.
Installation steps

```bash
git clone https://github.com/vibemachine-labs/arty.git
cd arty
curl -fsSL https://bun.sh/install | bash
bun install
```

When building from source, you will need to provide your own Google Drive Client ID. You can decide which permissions to grant it, as well as whether to go through Google's verification process (a sketch of one way to read a user-supplied Client ID follows the scope list below).
For testing, the following OAuth scopes are suggested:
- See and download your Google Drive files (included by default)
- See, edit, create, and delete only the specific Google Drive files you use with this app
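One common Expo pattern for supplying a build-time value like the Client ID is the `extra` block in `app.json`, read via `expo-constants`. This is a sketch of that pattern, not necessarily how this project wires it up; the `googleDriveClientId` field name is hypothetical:

```ts
import Constants from "expo-constants";

// Sketch: read a build-time value from app.json's expo.extra block, e.g.
//   { "expo": { "extra": { "googleDriveClientId": "<id>.apps.googleusercontent.com" } } }
// The field name is hypothetical.
export function getGoogleDriveClientId(): string | undefined {
  const extra = Constants.expoConfig?.extra as
    | { googleDriveClientId?: string }
    | undefined;
  return extra?.googleDriveClientId;
}
```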
To run in the iOS simulator:

```bash
bunx expo run:ios
```

To run on a physical device:

```bash
bunx expo run:ios --device
```

Editing Swift code in Xcode

To open the project in Xcode:

```bash
xed ios
```

In Xcode, the native Swift code lives under Pods / Development Pods.
Misc Dev Notes
For certain testing scenarios, disable the onboarding wizard by editing app/index.tsx and commenting out the useEffect block that evaluates onboarding status:
```tsx
useEffect(() => {
  let isActive = true;

  const evaluateOnboardingStatus = async () => {
    try {
      const storedKey = await getApiKey();
      const hasStoredKey = typeof storedKey === "string" && storedKey.trim().length > 0;
      if (!isActive) {
        return;
      }
      setOnboardingVisible(!hasStoredKey);
    } catch (error) {
      if (!isActive) {
        return;
      }
      log.warn("Unable to determine onboarding status from secure storage", error);
      setOnboardingVisible(true);
    }
  };

  if (!apiKeyConfigVisible) {
    void evaluateOnboardingStatus();
  }

  return () => {
    isActive = false;
  };
}, [apiKeyConfigVisible, onboardingCheckToken]);
```

- Project bootstrapped with `bunx create-expo-app@latest .`
- Refresh dependencies after pulling new changes: `bunx expo install`
- Install new dependencies: `bunx expo install <package-name>`
- Allow LAN access once: `bunx expo start --lan`
- Register device: `eas device:create`
- Scan the generated QR code on the device and install the provisioning profile via Settings.
- Configure build: `bunx eas build:configure`
- Build: `eas build --platform ios --profile dev_self_contained`
If pods misbehave, rebuild from scratch:

```bash
bunx expo prebuild --clean --platform ios
bunx expo run:ios
```

Architecture overview
React Native WebRTC libraries did not reliably support speakerphone mode during prototyping. The native Swift implementation resolves this issue but adds complexity and delays Android support.
Dynamic code generation currently powers some connector operations (Google Drive, GitHub), enabling rapid prototyping. However, the Hacker News tool demonstrates the preferred approach: statically defined tools that don't rely on codegen.
Migration in progress: Google Drive and GitHub tools will be converted from the codegen approach to static tools, improving reliability and performance. Long-term, codegen will remain available as a fallback option for rapid prototyping of new connectors.
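As a rough illustration of the static-tool shape (the `StaticTool` type and tool names here are hypothetical, not the project's actual interfaces), a Hacker News-style tool pairs a JSON Schema declaration for the model with a plain async handler; the endpoints below are the official public Hacker News API:

```ts
// Hypothetical shape for a statically defined tool; the project's actual
// interfaces may differ.
type StaticTool = {
  name: string;
  description: string;
  parameters: Record<string, unknown>; // JSON Schema shown to the model
  execute: (args: Record<string, unknown>) => Promise<string>;
};

export const hnTopStories: StaticTool = {
  name: "hn_top_stories",
  description: "Fetch the current top stories from Hacker News.",
  parameters: {
    type: "object",
    properties: {
      limit: { type: "number", description: "Number of stories to return" },
    },
  },
  execute: async (args) => {
    const limit = typeof args.limit === "number" ? args.limit : 10;
    // Official Firebase-backed Hacker News API; no auth token required.
    const ids: number[] = await fetch(
      "https://hacker-news.firebaseio.com/v0/topstories.json",
    ).then((r) => r.json());
    const stories = await Promise.all(
      ids.slice(0, limit).map((id) =>
        fetch(`https://hacker-news.firebaseio.com/v0/item/${id}.json`).then(
          (r) => r.json(),
        ),
      ),
    );
    return JSON.stringify(
      stories.map((s) => ({ title: s.title, url: s.url, score: s.score })),
    );
  },
};
```

Because a static tool is just a schema plus a deterministic handler, it avoids the latency and unreliability of generating connector code at runtime.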
MCP: not yet implemented, since all tools are currently local. Future versions will add MCP server support via cloud or local tunnel connections.
Web search: GPT-4 web search serves as a temporary solution. The roadmap includes integrating a dedicated search API (e.g., Brave Search) using user-provided API tokens.
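For a sense of what that integration would involve, a dedicated search call is a single authenticated GET. This sketch uses Brave Search's public web-search endpoint with a user-supplied subscription token; the helper name is illustrative:

```ts
// Illustrative helper for the roadmap item above: query the Brave Search API
// with a user-provided subscription token instead of GPT-4 web search.
export async function braveWebSearch(
  query: string,
  token: string,
): Promise<unknown> {
  const url = `https://api.search.brave.com/res/v1/web/search?q=${encodeURIComponent(query)}`;
  const res = await fetch(url, {
    headers: {
      Accept: "application/json",
      "X-Subscription-Token": token,
    },
  });
  if (!res.ok) throw new Error(`Brave Search request failed: ${res.status}`);
  return res.json();
}
```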
OpenAI is currently the only supported backend. Adding support for multiple providers and self-hosted backends is on the roadmap.
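Supporting multiple providers mostly means agreeing on an interface that the rest of the app codes against. A hypothetical sketch of such an abstraction (none of these names exist in the project today):

```ts
// Hypothetical backend abstraction for the multi-provider roadmap item.
// OpenAI Realtime would be the only implementation today.
export interface VoiceBackend {
  readonly name: string;
  connect(options: { apiKey: string; voice?: string }): Promise<void>;
  sendAudio(chunk: ArrayBuffer): void;
  onTranscript(callback: (text: string, isFinal: boolean) => void): void;
  disconnect(): Promise<void>;
}
```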
Roadmap
- Address the limitations listed above
- Improve text mode support
- Investigate async voice processing to reduce cost
- Add support for alternative voice providers (Unmute.sh, Speaches.ai, self-hosted)
- Remote MCP integration
- TypeScript MCP plugin support
The app itself will remain completely open source, with no restrictions or limitations.
Business model TBD. Likely a managed backend service using either:
- Azure OpenAI realtime APIs
- Fully open-source stack — possibly Unmute.sh or Speaches.ai
How to help
- Spread the word - Star github.com/vibemachine-labs/arty, share with friends
- Try it - Run the app and file issues
- Give feedback - Fill out a quick questionnaire (10 questions, 2 mins) or schedule a 15-min user interview
- Contribute ideas - File issues with appropriate labels
- Create pull requests - For larger proposed changes, it's probably better to file an issue first
Contact
- Email/Twitter: Email or Twitter/X via my GitHub profile.
- Issues, Ideas: Submit bugs, feature requests, or connector suggestions on GitHub Issues.
- Discord: A server will be launched if there’s enough interest.
- Responsible disclosure: Report security-relevant issues privately via email, using the address listed on my GitHub profile, before any public disclosure.



