Jitsi Meet
Jitsi Meet is a set of Open Source projects which empower users to use and deploy video conferencing platforms with state-of-the-art video quality and features.
Amongst others, here are the main features Jitsi Meet offers:
- Support for all current browsers
- Mobile applications
- Web and native SDKs for integration (see the embedding sketch after this list)
- HD audio and video
- Content sharing
- End-to-End Encryption
- Raise hand and reactions
- Chat with private conversations
- Polls
- Virtual backgrounds
And many more!
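One way to use the Web SDK mentioned above is the IFrame API, which is served by any Jitsi Meet deployment. Below is a minimal sketch of embedding a meeting in a web page; the room name, container id and dimensions are placeholders, and meet.jit.si is used only as an example deployment:

```html
<!-- Load the IFrame API from the deployment you want to embed. -->
<script src="https://meet.jit.si/external_api.js"></script>

<!-- Container that will host the conference iframe. -->
<div id="jitsi-container"></div>

<script>
    // Create the embedded conference; roomName and the container id are
    // placeholder values chosen for this sketch.
    const api = new JitsiMeetExternalAPI('meet.jit.si', {
        roomName: 'MyExampleRoom',
        parentNode: document.querySelector('#jitsi-container'),
        width: '100%',
        height: 600
    });

    // Optional: react to the local participant joining the conference.
    api.addListener('videoConferenceJoined', () => {
        console.log('Joined the embedded conference');
    });
</script>
```

See the handbook for the full list of configuration options and events.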
Using Jitsi Meet
Using Jitsi Meet is straightforward, as it's browser-based. Head over to meet.jit.si and give it a try. It's anonymous, scalable and free to use. All browsers are supported!
Using mobile? No problem, you can either use your mobile web browser or our fully-featured mobile apps:
- Android
- Android (F-Droid)
- iOS
If you are feeling adventurous and want to get an early scoop of the features as they are being developed, you can also sign up for our open beta testing here:
Running your own instance
If you'd like to run your own Jitsi Meet installation, head over to the handbook to get started.
We provide Debian packages and a comprehensive Docker setup to make deployments as simple as possible. Advanced users can also build all the components from source.
You can check the latest releases here.
Jitsi as a Service
If you like the branding capabilities of running your own instance but you'd like to avoid dealing with the complexity of monitoring, scaling and updates, JaaS might be for you.
8x8 Jitsi as a Service (JaaS) is an enterprise-ready video meeting platform that allows developers, organizations and businesses to easily build and deploy video solutions. With Jitsi as a Service we now give you all the power of Jitsi running on our global platform so you can focus on building secure and branded video experiences.
Documentation
All the Jitsi Meet documentation is available in the handbook.
Security
For a comprehensive description of all Jitsi Meet's security aspects, please check this link.
For a detailed description of Jitsi Meet's End-to-End Encryption (E2EE) implementation, please check this link.
For information on reporting security vulnerabilities in Jitsi Meet, see SECURITY.md.
Contributing
If you are looking to contribute to Jitsi Meet, first of all, thank you! Please see our guidelines for contributing.
Built with ❤️ by the Jitsi team at 8x8 and our community.