Now that screenshare is permitted while a user is in audio-only mode, do not create a new video track when audio-only mode is automatically disabled if one doesn't already exist. New tracks should only be created in response to a user action such as a camera unmute or starting a screenshare. Fixes https://github.com/jitsi/jitsi-meet/issues/11460.
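A minimal sketch of the intended guard, using illustrative helper and action names rather than the actual jitsi-meet code:

```typescript
// Sketch only: the state/dispatch shapes and the SET_VIDEO_MUTED action are
// illustrative stand-ins, not the real jitsi-meet redux code.
interface LocalTrack { videoType: string; }
interface AppState { tracks: LocalTrack[]; }

function onAudioOnlyDisabled(state: AppState, dispatch: (action: unknown) => void) {
    const cameraTrack = state.tracks.find(t => t.videoType === 'camera');

    if (!cameraTrack) {
        // No local video track exists: do NOT create one here. A new track is
        // only created later, by an explicit user action such as a camera
        // unmute or starting a screenshare.
        return;
    }

    // A track already exists, so only its mute state needs to be updated.
    dispatch({ type: 'SET_VIDEO_MUTED', muted: false });
}
```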
* fix(multi-stream) update selector to find ss track by videoType or mediaType
* ref(multi-stream) move fake ss creation logic and support video type changed
* refactor(multi-stream) decouple sending and receiving multiple screenshare streams
* fix(multi-stream) fix receiver constraints with signaling and without multi-stream
* fix(multi-stream) ensure plan b original SS thumbnail displays avatar
* fix(multi-stream) show fake SS for plan b sender
* refactor(multi-stream) poc for moving SS creation to state listener
* remove reference to fake SS creation
* fix lint errors
* rename to virtual screenshare participants
* fix minor bugs
* rename participant subscriber to specify web support only
* fix: Drop duplicate call of wait for owner.
* fix: Fixes leaking listeners while waiting for host to join.
While waiting for the host to join, the dialog attempts to join over and over again until we are admitted to the meeting. During that time the authRequired flag is set, and listeners were being added on every attempt.
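A minimal sketch of the fix's idea (guard the registration so listeners are added only once), with illustrative class and event names:

```typescript
// Sketch only: class, event and method names are illustrative.
type Listener = () => void;

class WaitForOwnerDialog {
    private listenersRegistered = false;

    constructor(private conference: { on: (event: string, listener: Listener) => void }) {}

    retryJoin() {
        // Previously a new listener was added on every retry while the
        // authRequired flag was set, leaking one listener per attempt.
        // Register the listeners only once.
        if (!this.listenersRegistered) {
            this.conference.on('conference.joined', () => this.onJoined());
            this.listenersRegistered = true;
        }

        // ...trigger the actual join attempt here...
    }

    private onJoined() {
        // ...close the dialog...
    }
}
```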
* feat: Introduces conference join in progress action.
This event comes from lib-jitsi-meet and is fired when we receive the first presence of the series while joining. It is always fired before the joined event.
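A minimal sketch of how a feature middleware can react to the new action, written as a plain redux middleware with an assumed action shape:

```typescript
// Sketch only: a plain redux middleware with an assumed action shape; the
// real code registers feature-specific middlewares for this action type.
import { Middleware } from 'redux';

const CONFERENCE_JOIN_IN_PROGRESS = 'CONFERENCE_JOIN_IN_PROGRESS';

const exampleMiddleware: Middleware = () => next => action => {
    const result = next(action);

    if ((action as { type: string }).type === CONFERENCE_JOIN_IN_PROGRESS) {
        // Fired on the first presence while joining, always before the
        // joined event, so setup that needs the conference object but not a
        // fully joined conference can start here.
    }

    return result;
};

export default exampleMiddleware;
```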
* fix: Moves testing middleware to use CONFERENCE_JOIN_IN_PROGRESS.
* fix: Moves follow-me middleware to use CONFERENCE_JOIN_IN_PROGRESS.
* fix: Moves some polls logic to middleware and use CONFERENCE_JOIN_IN_PROGRESS.
* fix: Moves reactions middleware to use CONFERENCE_JOIN_IN_PROGRESS.
* fix: Moves recordings middleware to use CONFERENCE_JOIN_IN_PROGRESS.
* fix: Moves shared-video middleware to use CONFERENCE_JOIN_IN_PROGRESS.
* fix: Moves videosipgw middleware to use CONFERENCE_JOIN_IN_PROGRESS.
* squash: Fix comments.
* fix: Fixes join in progress on web.
* fix: Moves variable extraction inside handlers.
* fix: Moves variable extraction inside handlers again.
* fix: Moves etherpad middleware to use CONFERENCE_JOIN_IN_PROGRESS.
In audio-only screenshare mode, when there is no local audio track from the mic present, do not add the audio track from the screenshare to redux. Adding the track to redux would sync its mute state to that of /base/media and show the mic as unmuted even when that is not the case. Fixes https://github.com/jitsi/jitsi-meet/issues/10706.
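A minimal sketch of the check, with an illustrative track shape and helper name:

```typescript
// Sketch only: the track shape and the isFromScreenshare flag are
// illustrative; TRACK_ADDED stands in for the real redux action.
interface JitsiLocalTrack {
    getType: () => 'audio' | 'video';
    isFromScreenshare: boolean;
}

function maybeAddTrackToRedux(
        track: JitsiLocalTrack,
        hasLocalMicTrack: boolean,
        dispatch: (action: unknown) => void) {
    if (track.getType() === 'audio' && track.isFromScreenshare && !hasLocalMicTrack) {
        // Do not push the screenshare audio track to redux: its mute state
        // would be synced with /base/media and the UI would show the mic as
        // unmuted even though there is no mic track.
        return;
    }

    dispatch({ type: 'TRACK_ADDED', track });
}
```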
* Pushes e2e pings to rtcstats
* linter fixes
* linter fixes
* Re-use existing event instead of introducing a new one.
* Don't update the connection info popup stats when the e2e rtt changes.
* Bumps ljm version to the latest
* e2e pings should work on mobile
* tweak the e2eRttChanged action properties
* fixes comments
* feat(multi-stream-support) Add screenshare as a second video track to the call.
This feature is behind the sendMultipleVideoStreams config.js flag. The sourceNameSignaling flag also needs to be enabled. Sending multiple tracks is currently supported only on endpoints running in unified plan mode. However, clients with source-name signaling enabled and running in plan-b can still receive multiple streams.
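An illustrative config.js fragment with the two flags mentioned above:

```typescript
// Illustrative config.js fragment; only the two flags named above are shown.
const config = {
    // Send camera and desktop as two separate video tracks
    // (sending is supported only on unified plan endpoints).
    sendMultipleVideoStreams: true,

    // Source-name signaling must also be enabled; plan-b clients with this
    // flag can still receive multiple streams.
    sourceNameSignaling: true
};
```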
* squash: check if there is an existing track before adding camera/desktop
* squash: enable multi-stream only on unified plan endpoints.
Make the behavior consistent with enabling the camera while the user is in audio-only mode. Also fixes an issue where the screenshare preview doesn't appear if it is enabled while the user is in audio-only mode.
This is a follow up for https://github.com/jitsi/lib-jitsi-meet/pull/1944. It is needed to avoid sending a source-remove followed by a source-add for the same SSRC. This happens when a user mutes the camera->starts SS->stops SS->turns the camera back on over a p2p connection in unified plan mode. Chrome fails to render the media if the same SSRC is removed and then added back to the same m-line.
* feat: Handles hidden-from-recorder from jwt.
Hides the participant that has this flag in the jwt from the recorder, effectively a hidden meeting moderator.
Makes sure follow me works and no tracks are being added.
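An illustrative JWT payload fragment; the exact claim placement is an assumption here:

```typescript
// Illustrative JWT payload fragment (claim placement assumed). A participant
// carrying this flag is hidden from the recorder while still able to
// moderate the meeting.
const exampleJwtPayload = {
    context: {
        user: {
            name: 'Hidden moderator',
            'hidden-from-recorder': true
        }
    }
};
```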
* squash: Skips showing notification when disabling local audio and video.
* squash: Fixes comments.
* squash: Updates with ljm changes.
Added config to choose between recording and always mode
Created function to check if feature should be used
Removed check from stop feature as it now checks if the feature was previously on
Only get video track on feature start
In case of slow-resolving gUM, we can join the call (by quickly joining from the pre-join screen) and gUM will resolve after we receive the start audio/video muted policy from jicofo, producing a source-add and joining unmuted, ignoring jicofo.
* feat(tracks) Clean up the track if a source addition is rejected.
When jicofo rejects a source-add because of sender limits, dispose the local track and remove it from the conference.
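A minimal sketch of the cleanup, assuming promise-based addTrack/removeTrack/dispose calls similar to lib-jitsi-meet's:

```typescript
// Sketch only: the promise-based addTrack/removeTrack/dispose calls mirror
// lib-jitsi-meet's API shape, but error detection and wiring are illustrative.
interface LocalTrack { dispose: () => Promise<void>; }
interface Conference {
    addTrack: (track: LocalTrack) => Promise<void>;
    removeTrack: (track: LocalTrack) => Promise<void>;
}

async function addTrackWithCleanup(conference: Conference, track: LocalTrack) {
    try {
        await conference.addTrack(track);
    } catch (error) {
        // Jicofo rejected the source-add (e.g. the sender limit was reached):
        // remove the track from the conference and dispose it so no dangling
        // local track is left behind.
        await conference.removeTrack(track);
        await track.dispose();
        throw error;
    }
}
```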
* chore(deps) update LJM to latest.
* fix(screenshot-capture) Update screenshot capture feature
Add participants jid list to request
Enable screenshot capture only when recording is also on
Updated interval
* feat(conference) Impl audio/video mute disable when sender limit is reached.
Jicofo sends a presence when the audio/video sender limit is reached in the conference. The client can then proceed to disable the audio and video mute buttons when this occurs.
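A minimal sketch of how the client-side state could track this, with illustrative action and state names:

```typescript
// Sketch only: action and state names are illustrative; the real feature
// uses a dedicated action type and also shows a notification (see the
// squash note below).
interface MediaState {
    audioUnmuteBlocked: boolean;
    videoUnmuteBlocked: boolean;
}

const initialState: MediaState = {
    audioUnmuteBlocked: false,
    videoUnmuteBlocked: false
};

function mediaReducer(
        state: MediaState = initialState,
        action: { type: string; blocked?: boolean }): MediaState {
    switch (action.type) {
    case 'AUDIO_SENDER_LIMIT_REACHED':
        // Jicofo signalled via presence that no more audio senders are
        // allowed, so the audio unmute button gets disabled.
        return { ...state, audioUnmuteBlocked: Boolean(action.blocked) };
    case 'VIDEO_SENDER_LIMIT_REACHED':
        return { ...state, videoUnmuteBlocked: Boolean(action.blocked) };
    default:
        return state;
    }
}
```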
* squash: use a different action type and show notification.
- implement breakout-rooms
- integrated into the participants panel
- managed by moderators
- moderators can send participants to breakout-rooms
- participants can join breakout rooms by themselves
- participants can leave breakout rooms anytime
Co-authored-by: Robert Pintilii <robert.pin9@gmail.com>
Co-authored-by: Saúl Ibarra Corretgé <saghul@jitsi.org>
* Initial implementation; Happy flow
* Maybe revert this
* Functional prototype
* feat(facial-expressions): get stream when changing background effect and use presenter effect with camera
* add(facial-expressions): array that stores the expressions during the meeting
* refactor(facial-expressions): capture ImageBitmap from stream with the ImageCapture API
* add(speaker-stats): expression label
* fix(facial-expression): expression store
* revert: expression label on speaker stats
* add(facial-expressions): broadcast of expression when it changes
* feat: facial expression handling on prosody
* fix(facial-expressions): get the right track when opening and closing camera
* add(speaker-stats): facial expression column
* fix(facial-expressions): allow to start facial recognition only after joining conference
* fix(mod_speakerstats_component): storing last emotion in speaker stats component and sending it
* chore(facial-expressions): change detection from 2000ms to 1000ms
* add(facial-expressions): send expression to server when there is only one participant
* feat(facial-expressions): store expressions as a timeline
* feat(mod_speakerstats_component): store facial expressions as a timeline
* fix(facial-expressions): stop facial recognition only when muting video track
* fix(facial-expressions): presenter mode get right track to detect face
* add: polyfills for image capture for firefox and safari
* refactor(facial-expressions): store expressions by counting them in a map
* chore(facial-expressions): remove manually assigning the backend for tensorflowjs
* feat(facial-expressions): move face-api from main thread to web worker
* fix(facial-expressions): make feature work on firefox and safari
* feat(facial-expressions): camera time tracker
* feat(facial-expressions): camera time tracker in prosody
* add(facial-expressions): expressions time as TimeElapsed object in speaker stats
* fix(facial-expressions): lower the frequency of detection when tf uses cpu backend
* add(facial-expressions): duration to the expression and send it with the duration when it is done
* fix(facial-expressions): prosody speaker stats convert from string to number and bool values set by xmpp
* refactor(facial-expressions): change expressions labels from text to emoji
* refactor(facial-expressions): remove camera time tracker
* add(facial-expressions): detection time interval
* chore(facial-expressions): add docs and minor refactor of the code
* refactor(facial-expressions): put timeout in worker and remove set interval in main thread
* feat(facial-expressions): disable feature in the config
* add(facial-expressions): tooltips of labels in speaker stats
* refactor(facial-expressions): send facial expressions function and remove some unused functions and console logs
* refactor(facial-expressions): rename action type when a change is done to the track by the virtual backgrounds to be used in facial expressions middleware
* chore(facial-expressions): order imports and format some code
* fix(facial-expressions): rebase issues with newer master
* fix(facial-expressions): package-lock.json
* fix(facial-expression): add commented default value of disableFacialRecognition flag and short description
* fix(facial-expressions): change disableFacialRecognition to enableFacialRecognition flag in config
* fix: resources load-test package-lock.json
* fix(facial-expressions): set and get facial expressions only if facial recognition enabled
* add: facial recognition resources folder in .eslintignore
* chore: package-lock update
* fix: package-lock.json
* fix(facial-expressions): gpu memory leak in the web worker
* fix(facial-expressions): set cpu time interval for detection to 6000ms
* chore(speaker-stats): fix indentation
* chore(facial-expressions): remove empty lines between comments and type declarations
* fix(facial-expressions): remove camera time tracker
* fix(facial-expressions): remove facialRecognitionAllowed flag
* fix(facial-expressions): remove sending interval time to worker
* refactor(facial-expression): middleware
* fix(facial-expression): end tensor scope after setting backend
* fix(facial-expressions): sending info back to worker only on facial expression message
* fix: lint errors
* refactor(facial-expressions): bundle web worker using webpack
* fix: deploy-facial-expressions command in makefile
* chore: fix load test package-lock.json and package.json
* chore: sync package-lock.json
Co-authored-by: Mihai-Andrei Uscat <mihai.uscat@8x8.com>
There are hard-to-handle race conditions around screensharing/presenter mode turning on/off. It's easier to make turning the camera on/off a NO-OP while a switch to screen sharing is in progress than to try to handle this gracefully. It should be okay for the user to click the button again after the switch operation is done.
Ideally this logic will be re-implemented in redux middlewares and moved out of the conference.js file.
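A minimal sketch of the NO-OP guard described above, with illustrative flag and function names:

```typescript
// Sketch only: the flag and function names are illustrative stand-ins for
// the conference.js logic described above.
let switchInProgress = false;

async function toggleScreenSharing(enable: boolean): Promise<void> {
    switchInProgress = true;
    try {
        // ...create/replace the desktop (enable === true) or camera tracks...
        await Promise.resolve(enable);
    } finally {
        switchInProgress = false;
    }
}

function toggleCamera(muted: boolean): void {
    if (switchInProgress) {
        // NO-OP while a screenshare switch is in progress; the user can
        // simply click the button again once the switch has finished.
        return;
    }

    // ...apply the camera mute/unmute (`muted`) as usual...
}
```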