A comprehensive video proctoring system designed for online interviews with advanced computer vision capabilities for focus detection and unauthorized object identification.
Try the Live App
- Real-time Video Monitoring: Live video feed with candidate monitoring
- Focus Detection: Tracks whether the candidate is looking at the screen or looking away
- Face Detection: Monitors presence of face and detects multiple faces
- Object Detection: Identifies unauthorized items (phones, books, notes, devices)
- Event Logging: Comprehensive logging with timestamps and durations
- Integrity Scoring: Automated scoring based on violations and events
- Real-time Alerts: Live notifications for suspicious activities
- Professional Reporting: PDF and CSV report generation
- Session Management: Complete interview session workflow
- Responsive Design: Optimized for desktop interview setups
- Modern UI/UX: Clean, professional interface
- Node.js 16+ and npm
- Modern web browser with camera access
- HTTPS connection (required for camera access)
1. Clone the repository
   - `git clone <repository-url>`
   - `cd video-proctoring-system`
2. Install dependencies
   - `npm install`
3. Start the development server
   - `npm run dev`
4. Access the application
   - Open http://localhost:5173 (browsers treat `localhost` as a secure context, so camera access works locally; any non-local deployment must be served over HTTPS)
   - Allow camera permissions when prompted
1. Setup Phase
   - Enter the candidate's full name
   - Click "Start Interview" to begin the session
2. Monitoring Phase
   - The system automatically starts video monitoring
   - Real-time detection of focus and objects
   - Live alerts for violations and suspicious activities
   - The monitoring dashboard shows the current status and recent events
3. Completion Phase
   - Click "End Session" when the interview is complete
   - The system generates a comprehensive proctoring report
   - Download reports in PDF or CSV format
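The CSV half of the report export can be sketched as a plain event-log serializer. The real logic lives in `src/utils/reportGenerator.ts`; the `LoggedEvent` shape and column names below are assumptions for illustration, not the project's actual types:

```typescript
// Hypothetical event shape; the app's real type is in src/types/proctoring.ts.
interface LoggedEvent {
  timestamp: string;   // ISO 8601, e.g. "2024-01-01T10:00:00Z"
  type: string;        // e.g. "focus_lost", "phone_detected"
  durationSec: number; // how long the condition persisted
}

// Serialize the event log into a CSV string for download.
function eventsToCsv(events: LoggedEvent[]): string {
  const header = "timestamp,type,duration_seconds";
  const rows = events.map((e) => [e.timestamp, e.type, e.durationSec].join(","));
  return [header, ...rows].join("\n");
}
```

In the browser, the resulting string would typically be wrapped in a `Blob` and offered via a temporary object URL for download.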
- Focus Loss: Triggers when the candidate looks away from the screen for more than 5 seconds
- No Face: Alerts when no face has been detected for more than 10 seconds
- Multiple Faces: Detects when multiple people are present
- Mobile Phones: Identifies smartphones in video frame
- Books/Notes: Detects paper materials and notebooks
- Electronic Devices: Recognizes laptops, tablets, other devices
- Confidence Scoring: Only reports detections above 70% confidence
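A minimal sketch of how the 70% confidence gate might be applied to raw detections. The `Detection` shape mirrors COCO-SSD's output (`class`, `score`, `bbox`), but the prohibited-class list and the filter itself are illustrative, not the app's actual code:

```typescript
// Shape matching what COCO-SSD's detect() returns per prediction.
interface Detection {
  class: string;                            // COCO label, e.g. "cell phone"
  score: number;                            // confidence in [0, 1]
  bbox: [number, number, number, number];   // [x, y, width, height]
}

const CONFIDENCE_THRESHOLD = 0.7;
// Assumed subset of COCO labels treated as prohibited items.
const PROHIBITED = new Set(["cell phone", "book", "laptop"]);

// Keep only high-confidence detections of prohibited items.
function reportableDetections(detections: Detection[]): Detection[] {
  return detections.filter(
    (d) => d.score >= CONFIDENCE_THRESHOLD && PROHIBITED.has(d.class)
  );
}
```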
- Base Score: 100 points
- Deduction Rules:
- Phone detected: -15 points
- Books/notes detected: -10 points
- Electronic devices: -10 points
- Multiple faces: -8 points
- Focus lost: -5 points
- No face detected: -5 points
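The deduction rules above amount to a small pure function. This sketch assumes the violation names and a floor of zero; the app's real types live in `src/types/proctoring.ts` and may differ:

```typescript
// Illustrative violation names; not the app's actual enum.
type ViolationType =
  | "phone"
  | "notes"
  | "device"
  | "multiple_faces"
  | "focus_lost"
  | "no_face";

// Point deductions matching the rules listed above.
const DEDUCTIONS: Record<ViolationType, number> = {
  phone: 15,
  notes: 10,
  device: 10,
  multiple_faces: 8,
  focus_lost: 5,
  no_face: 5,
};

// Start from the base score of 100 and subtract per violation,
// clamping at 0 (the floor is an assumption).
function computeIntegrityScore(violations: ViolationType[]): number {
  const total = violations.reduce((sum, v) => sum + DEDUCTIONS[v], 0);
  return Math.max(0, 100 - total);
}
```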
- Candidate information and session details
- Duration and timeline of interview
- Event summary with violation counts
- Detailed event log with timestamps
- Final integrity score and assessment
Focus & Object Detection in Video Interviews/
├── public/
│   └── favicon.svg
├── src/
│   ├── components/
│   │   ├── MonitoringDashboard.tsx
│   │   ├── ProctoringReport.tsx
│   │   ├── SessionSetup.tsx
│   │   └── VideoFeed.tsx
│   ├── hooks/
│   │   ├── useAudioDetection.ts
│   │   ├── useFaceDetection.ts
│   │   ├── useObjectDetection.ts
│   │   └── useVideoStream.ts
│   ├── types/
│   │   └── proctoring.ts
│   ├── utils/
│   │   └── reportGenerator.ts
│   ├── App.tsx
│   ├── supabaseConfig.ts
│   └── ...
├── .env
├── index.html
└── package.json
- TensorFlow.js: Machine learning inference in browser
- COCO-SSD: Pre-trained object detection model
- MediaPipe: Face detection and landmark tracking
- WebRTC: Real-time video streaming
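COCO-SSD only recognizes the 80 COCO labels, so categories like "books/notes" and "electronic devices" have to be mapped from model classes such as `book` and `laptop`. The mapping below is an assumed translation layer, not the app's actual configuration:

```typescript
// Assumed mapping from COCO-SSD class labels to proctoring violation
// categories; "cell phone", "book", "laptop", "tv", and "keyboard" are
// all genuine COCO labels.
const CLASS_TO_VIOLATION: Record<string, string> = {
  "cell phone": "phone",
  book: "notes",
  laptop: "device",
  tv: "device",
  keyboard: "device",
};

// Translate a detected class into a violation category, or null if the
// class is benign (e.g. "person", "chair").
function toViolation(cocoClass: string): string | null {
  return CLASS_TO_VIOLATION[cocoClass] ?? null;
}
```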
- Focus Tracking: Analyzes face position and eye direction
- Object Recognition: Identifies prohibited items with confidence scoring
- Event Processing: Intelligent filtering and threshold-based alerts
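The threshold-based alerting (focus lost for more than 5 s, no face for more than 10 s) can be modeled as a small state machine that fires only once a condition has persisted past its grace period. Names and shapes here are illustrative, not the actual hook implementations:

```typescript
// Grace periods in milliseconds, matching the thresholds described above.
const THRESHOLD_MS = { focus_lost: 5_000, no_face: 10_000 } as const;

type Condition = keyof typeof THRESHOLD_MS;

interface TrackerState {
  since: number | null; // timestamp when the condition first became true
}

// Pure update step: given the previous state, the current condition
// status, and the current time, return the next state and whether an
// alert should fire on this tick.
function update(
  state: TrackerState,
  condition: Condition,
  active: boolean,
  now: number
): { state: TrackerState; alert: boolean } {
  if (!active) return { state: { since: null }, alert: false }; // condition cleared
  const since = state.since ?? now;                             // start timing on first activation
  const alert = now - since >= THRESHOLD_MS[condition];
  return { state: { since }, alert };
}
```

In the app this kind of check would run once per detection tick (e.g. inside a `requestAnimationFrame` loop), with `active` derived from the face/focus detectors.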
# Build optimized production bundle
npm run build
# Preview production build
npm run preview
# Deploy to hosting platform
npm run deploy
- Efficient Processing: Optimized detection loops with requestAnimationFrame
- Model Loading: Lazy loading of ML models
- Memory Management: Proper cleanup of video streams and detection loops
- Responsive Design: Optimized for various screen sizes
- WebRTC Support: Modern browsers (Chrome 60+, Firefox 55+, Safari 11+)
- Hardware Acceleration: GPU acceleration recommended for smooth performance
- Camera Access: Requires HTTPS and user permission grants
- Local Processing: All detection happens client-side
- No Video Storage: Video streams are processed in real-time only
- Event Logging: Only detection events and metadata are stored
- User Consent: Explicit camera permission required
- Audit Trail: Complete event logging with timestamps
- Transparency: Clear indication of monitoring status
- Data Export: Full report export capabilities
- Eye Closure Detection: Monitors for drowsiness (future enhancement)
- Audio Analysis: Background voice detection (future enhancement)
- Behavior Analytics: Advanced behavioral pattern recognition
- API Ready: Structured data format for integration
- Webhook Support: Real-time event notifications (configurable)
- Report Automation: Scheduled and automated report generation
- Machine Learning: Custom model training for specific use cases
- Multi-language: Internationalization support
- Advanced Analytics: Behavior pattern analysis
- Cloud Integration: Optional cloud storage and processing
- Mobile Support: Responsive design for tablet/mobile interviews
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
For support and questions:
- Create an issue in the repository
- Check the documentation and FAQ
- Review the code comments for implementation details
Built with ❤️ for secure and reliable online interview proctoring