Yu: Your Virtual Clone Assistant is a React Native mobile application that serves as a personal AI assistant with capabilities in vision, voice, translation, and device control.
- Node.js 18+ installed
- npm or yarn package manager
- Expo CLI installed globally: `npm install -g expo-cli`
- Expo Go app installed on an iOS/Android device (for testing)
- TypeScript knowledge (recommended)
1. Clone the repository: `git clone <repository-url>`, then `cd Yu`
2. Install dependencies: `npm install`
3. Start the development server: `npm start`
4. Run on device:
   - Scan the QR code with the Expo Go app (iOS/Android)
   - Or press `i` for the iOS simulator, `a` for the Android emulator
    # Start development server
    npm start

    # Start with cache cleared
    npm run start:clean

    # Run on Android
    npm run android

    # Run on iOS
    npm run ios

    # Run on Web
    npm run web

    # Run tests
    npm test

    # Lint code
    npm run lint

Project structure:

    Yu/
    ├── src/
    │   ├── screens/              # Screen components
    │   │   ├── HomeScreen.tsx
    │   │   ├── ProfileScreen.tsx
    │   │   ├── ProfileSetupScreen.tsx
    │   │   ├── ChatScreen.tsx
    │   │   ├── TranslateScreen.tsx
    │   │   └── YuVisionScreen.tsx
    │   ├── components/           # Reusable components
    │   │   ├── YuOrb.tsx
    │   │   └── AudioVisualization.tsx
    │   ├── theme/                # Theme configuration
    │   │   ├── colors.ts
    │   │   ├── typography.ts
    │   │   └── index.ts
    │   └── utils/                # Utility functions
    │       └── speech.ts
    ├── assets/                   # Images, fonts, etc.
    ├── docs/                     # Documentation
    ├── App.tsx                   # Main app component
    ├── index.js                  # Entry point
    ├── package.json              # Dependencies
    ├── tsconfig.json             # TypeScript config
    ├── babel.config.js           # Babel config
    └── metro.config.js           # Metro bundler config
The Home Screen is your main interaction hub.
- Yu Orb: Tap to start listening, long press for profile
- Quick Actions: Access Yu-Vision, Yu-Voice, Yu-Translate, Yu-Control
- Listening Mode: Tap the orb to activate voice input
- Responses: Yu responds with voice and text
- Tap the Yu Orb to start listening
- Wait for the 3-second listening period
- Yu will respond with voice and text
- Use quick action cards to navigate to specific features
- Name Setup: Set your name for personalization
- Personality Selection: Choose how Yu communicates
  - Assistant: Formal and helpful
  - Friend: Casual and empathetic
  - Expert: Technical and efficient
  - Minimalist: Quiet and unobtrusive
- Presence Level: Control Yu's activity level
  - Full Yu: Complete interaction
  - Quiet Yu: Notifications only
  - Shadow Yu: Passive learning
  - Off: Complete privacy
- Settings: Access notifications, privacy, voice, and help

- Long press the Yu Orb from Home Screen
- Set your name in the input field
- Select your preferred personality
- Choose your presence level
- Access settings as needed
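The profile options above can be modeled with TypeScript union types. This is an illustrative sketch — the type and function names are hypothetical, not necessarily the identifiers used in the app:

```typescript
// Hypothetical models for the personality and presence options described above.
type Personality = "assistant" | "friend" | "expert" | "minimalist";
type PresenceLevel = "full" | "quiet" | "shadow" | "off";

interface Profile {
  name: string;
  personality: Personality;
  presence: PresenceLevel;
}

// Example: pick a greeting style based on the selected personality.
const greetings: Record<Personality, (name: string) => string> = {
  assistant: (n) => `Hello, ${n}. How can I help?`,
  friend: (n) => `Hey ${n}!`,
  expert: (n) => `${n}: ready.`,
  minimalist: () => "",
};

function greeting(profile: Profile): string {
  return greetings[profile.personality](profile.name);
}
```

Modeling the options as unions lets the compiler catch typos when new personalities or presence levels are added.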
- Text Messaging: Type messages to Yu
- Voice Input: Use microphone button for speech-to-text
- Recording: Visual feedback during recording
- AI Responses: Yu responds with contextual answers
- Camera Access: Navigate to Yu-Vision from header
- Navigate from Home Screen → Yu-Control
- Type a message or tap microphone to record
- Recording card appears with visualization
- Stop recording or wait for auto-stop
- Text appears in input field
- Send message to get Yu's response
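The recording flow above can be sketched as a small state machine. The state and event names below are hypothetical, chosen just to mirror the steps in the list:

```typescript
// Hypothetical states for the chat voice-input flow described above.
type RecordingState = "idle" | "recording" | "transcribed" | "sent";
type RecordingEvent = "tapMic" | "stop" | "send";

// Advance the state when an event occurs; invalid transitions are ignored.
function next(state: RecordingState, event: RecordingEvent): RecordingState {
  if (state === "idle" && event === "tapMic") return "recording";
  if (state === "recording" && event === "stop") return "transcribed"; // text lands in the input field
  if (state === "transcribed" && event === "send") return "sent";
  return state;
}
```

Driving the UI from a single state value like this keeps the recording card, input field, and send button consistent with each other.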
Translate text and speech between languages.

- Language Selection: Choose source and target languages
- Text Translation: Type or paste text to translate
- Voice Translation: Record audio for speech-to-text translation
- Voice Playback: Hear translated text
- Copy to Clipboard: Copy translated text
- Quick Phrases: One-tap common phrase translation
- Navigate from Home Screen → Yu-Translate
- Select source and target languages
- Enter text or record audio
- Tap Translate button
- View translated text
- Use sound button to hear translation
- Use copy button to copy to clipboard
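The Quick Phrases feature could be backed by a simple lookup table keyed by language pair. The entries and function below are illustrative examples, not the app's actual data or API:

```typescript
// Hypothetical quick-phrase table: "source->target" => phrase => translation.
const quickPhrases: Record<string, Record<string, string>> = {
  "en->es": { "Hello": "Hola", "Thank you": "Gracias" },
  "en->fr": { "Hello": "Bonjour", "Thank you": "Merci" },
};

// Return the stored translation, or undefined if the pair or phrase is unknown.
function quickTranslate(source: string, target: string, phrase: string): string | undefined {
  return quickPhrases[`${source}->${target}`]?.[phrase];
}
```

A table like this makes one-tap phrases instant and offline, with the full translation path reserved for free-form text.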
Use camera for visual intelligence.

- Camera View: Real-time camera preview
- Capture: Take photos for analysis
- Flashlight: Toggle camera flash
- Analysis: Yu provides voice analysis of captured images
- Navigate from Home Screen → Yu-Vision
- Point camera at object or scene
- Tap capture button
- Listen to Yu's analysis
- Use flashlight button for low-light conditions
- Language: TypeScript
- Formatting: Follow ESLint rules
- Naming: PascalCase for components, camelCase for functions
- File Structure: One component per file
- Create the screen component in `src/screens/`
- Add it to the navigation in `App.tsx`
- Update navigation types if needed
- Add the screen to the documentation
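If the app uses React Navigation, the navigation types mentioned above might be maintained as a param list covering the screens in `src/screens/`. The type and helper names here are assumptions for illustration:

```typescript
// Hypothetical param list mirroring the screens in src/screens/
// (undefined = the screen takes no route params).
type RootStackParamList = {
  Home: undefined;
  Profile: undefined;
  ProfileSetup: undefined;
  Chat: undefined;
  Translate: undefined;
  YuVision: undefined;
};

const routeNames = ["Home", "Profile", "ProfileSetup", "Chat", "Translate", "YuVision"] as const;

// Runtime guard, handy when wiring up a newly added screen.
function isKnownRoute(name: string): name is keyof RootStackParamList {
  return (routeNames as readonly string[]).includes(name);
}
```

Keeping the param list in one place means a forgotten registration shows up as a compile error rather than a runtime crash.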
- Create the component in `src/components/`
- Export it from the component file
- Use TypeScript interfaces for props
- Follow existing component patterns
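A typed props contract for a reusable component might follow this pattern. The component name and prop set below are hypothetical, loosely inspired by `YuOrb.tsx`:

```typescript
// Hypothetical props for an orb-like component, declared as a TypeScript interface.
interface OrbProps {
  size?: number;       // diameter in pixels
  listening?: boolean; // whether the orb is in listening mode
  color?: string;      // fill color
}

// Resolve caller props against defaults, the way a component body might.
function resolveOrbProps(props: OrbProps): Required<OrbProps> {
  return {
    size: props.size ?? 120,
    listening: props.listening ?? false,
    color: props.color ?? "#6C5CE7",
  };
}
```

Declaring props as an interface with optional fields and explicit defaults keeps call sites short while the component stays fully typed.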
Edit `src/theme/colors.ts` and `src/theme/typography.ts` to customize:
- Color palette
- Typography styles
- Component themes
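A minimal shape for the theme files might look like this. The values are placeholders for illustration, not the app's actual palette:

```typescript
// Sketch of src/theme/colors.ts (placeholder values).
export const colors = {
  primary: "#6C5CE7",
  background: "#0F0F14",
  text: "#FFFFFF",
} as const;

// Sketch of src/theme/typography.ts (placeholder values).
export const typography = {
  title: { fontSize: 24, fontWeight: "700" },
  body: { fontSize: 16, fontWeight: "400" },
} as const;
```

Declaring the theme objects `as const` lets TypeScript narrow values to literals, so components can type against the exact palette.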
Currently, testing is manual. Future enhancements will include:
- Unit tests (Jest)
- Component tests (React Native Testing Library)
- E2E tests (Detox)
Edit `app.json` for:
- App name and version
- Icons and splash screens
- Permissions
- Platform-specific settings
Edit `metro.config.js` for:
- Asset resolution
- Module resolution
- Cache configuration
Edit `tsconfig.json` for:
- Compiler options
- Path aliases
- Type definitions
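A path-alias setup in `tsconfig.json` might look like this (the alias names are illustrative, not ones the project necessarily defines):

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@components/*": ["src/components/*"],
      "@theme/*": ["src/theme/*"]
    }
  }
}
```

Note that Metro does not read `tsconfig.json` paths at runtime; a matching Babel alias configuration (e.g. via `babel-plugin-module-resolver` in `babel.config.js`) is typically needed as well.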
Currently, no environment variables are used. Future backend integration will require:
- API endpoints
- API keys
- Feature flags
Problem: Build errors or cache issues

Solution:

    npm run start:clean
    # Or
    npx expo start --clear

Problem: Missing dependencies

Solution:

    npm install
    # Or
    npx expo install --fix

Problem: Type errors

Solution:
- Check the `tsconfig.json` configuration
- Ensure all types are imported correctly
- Run `npm run lint` to identify issues
Problem: Camera not working

Solution:
- Check device permissions
- Ensure `expo-camera` is properly installed
- Check `app.json` permissions

Problem: Text-to-speech not playing

Solution:
- Check device volume
- Ensure `expo-speech` is installed
- Check for errors in the console
    npm start
    # Press 'j' to open the debugger

- React Native Debugger
- Expo DevTools
- Device console logs
- Use React DevTools Profiler
- Monitor bundle size
- Check for memory leaks
iOS:
- Requires iOS 13+
- Camera permissions required
- Speech synthesis available

Android:
- Requires Android 6.0+
- Camera permissions required
- Speech synthesis available

Web:
- Limited camera support
- Speech synthesis via the Web Speech API
- Some features may not work
Current state:
- No user authentication
- No data persistence
- No network requests
- All data is local

Planned:
- Secure API communication
- Encrypted data storage
- User authentication
- Privacy controls
- Data encryption
- Initial load: < 3 seconds
- Screen transitions: Smooth
- Animations: 60 FPS
- Memory usage: Optimized
- Use React.memo for expensive components
- Lazy load screens if needed
- Optimize images
- Minimize re-renders
- Check Issues Documentation for known issues
- Report bugs via GitHub Issues
- Check Expo documentation for platform-specific issues
Last Updated: December 2025
Version: 1.0.0



