* Pushes e2e pings to rtcstats
* linter fixes
* linter fixes
* Re-use existing event instead of introducing a new one.
* Don't update the connection info popup stats when the e2e RTT changes.
* Bumps ljm version to the latest
* e2e pings should work on mobile
* tweak the e2eRttChanged action properties
* fixes comments
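A minimal sketch of what the e2eRttChanged action could look like; the payload property names below are assumptions for illustration, not the actual shape used by the app.

```ts
// Hypothetical action shape for e2e RTT updates (property names are assumptions).
const E2E_RTT_CHANGED = 'E2E_RTT_CHANGED';

interface E2eRttChangedAction {
    type: typeof E2E_RTT_CHANGED;
    e2eRtt: {
        participant: string; // remote participant id the ping was sent to
        rtt: number;         // measured round-trip time, in milliseconds
    };
}

function e2eRttChanged(participant: string, rtt: number): E2eRttChangedAction {
    return {
        type: E2E_RTT_CHANGED,
        e2eRtt: { participant, rtt }
    };
}
```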
* feat(multi-stream-support) Add screenshare as a second video track to the call.
This feature is behind the sendMultipleVideoStreams config.js flag. The sourceNameSignaling flag also needs to be enabled. Sending multiple tracks is currently supported only on endpoints running in Unified Plan mode. However, clients with source-name signaling enabled and running in Plan B can still receive multiple streams.
* squash: check if there is an existing track before adding camera/desktop
* squash: enable multi-stream only on unified plan endpoints.
Make the behavior consistent with enabling the camera when the user is in audio-only mode. Also fixes an issue where the screenshare preview doesn't appear if it is enabled while the user is in audio-only mode.
This is a follow-up for https://github.com/jitsi/lib-jitsi-meet/pull/1944. This is needed to avoid sending a source-remove followed by a source-add for the same SSRC. This happens when a user mutes the camera -> starts SS -> stops SS -> turns the camera back on over a p2p connection in Unified Plan mode. Chrome fails to render the media if the same SSRC is removed and added back to the same m-line.
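A minimal config.js sketch for the two flags mentioned above; only the relevant keys are shown and the values are illustrative.

```ts
// Sketch of the relevant config.js flags for multi-stream support (other options omitted).
const config = {
    // Send screenshare as a second video track instead of replacing the camera track.
    sendMultipleVideoStreams: true,

    // Required for multi-stream: enables source-name signaling.
    sourceNameSignaling: true
};
```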
* feat: Handles hidden-from-recorder from jwt.
Hides the participant that has this flag in the JWT from the recorder, e.g. a hidden meeting moderator.
Makes sure Follow Me works and no tracks are added.
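A sketch of where such a flag could live in the JWT payload; the exact claim path (context.user) is an assumption made for illustration, not something confirmed by this changelog.

```ts
// Hypothetical JWT payload carrying the hidden-from-recorder flag
// (the context.user location is an assumption).
const jwtPayload = {
    context: {
        user: {
            name: 'Meeting moderator',
            'hidden-from-recorder': true // this participant is hidden from the recorder
        }
    },
    room: '*'
};
```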
* squash: Skips showing a notification when disabling local audio and video.
* squash: Fixes comments.
* squash: Updates with ljm changes.
Added a config option to choose between 'recording' and 'always' mode
Created a function to check whether the feature should be used
Removed the check from the stop path, as it now checks whether the feature was previously on
Only get the video track on feature start
In case of a slow-resolving gUM, we can join the call (by quickly joining from the pre-join screen) and gUM will resolve after we receive the start A/V muted policy from jicofo, producing a source-add and joining unmuted, ignoring jicofo.
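A rough sketch of the kind of guard that avoids the race described above: when the gUM promise resolves after jicofo's start A/V muted policy has already arrived, the track is muted before it is added, so no unmuted source-add is sent. All names below are hypothetical.

```ts
interface LocalTrack {
    type: 'audio' | 'video';
    mute: () => Promise<void>;
}

// Hypothetical guard: respect jicofo's start-muted policy for tracks whose gUM resolves late.
async function addTrackRespectingStartMuted(
        conference: { addTrack: (t: LocalTrack) => Promise<void> },
        startMutedPolicy: { audio: boolean; video: boolean },
        trackPromise: Promise<LocalTrack>): Promise<void> {
    const track = await trackPromise; // gUM may resolve long after we already joined

    // If jicofo asked us to start muted for this media type, mute before adding the track,
    // so the resulting source-add does not announce an unmuted source.
    if (startMutedPolicy[track.type]) {
        await track.mute();
    }

    await conference.addTrack(track);
}
```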
* feat(tracks) Clean up the track if a source addition is rejected.
When jicofo rejects a source-add because of sender limits, dispose and remove the local track from the conference.
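A sketch of the cleanup path when a source addition is rejected due to sender limits; the event name and handler wiring below are assumptions for illustration, not confirmed lib-jitsi-meet API.

```ts
interface JitsiLocalTrackLike {
    dispose: () => Promise<void>;
}

interface ConferenceLike {
    removeTrack: (track: JitsiLocalTrackLike) => Promise<void>;
    on: (event: string, handler: (track: JitsiLocalTrackLike) => void) => void;
}

// Hypothetical handler: when the source-add is rejected, remove the local track
// from the conference and dispose it so the UI state stays consistent.
function handleRejectedSourceAdds(conference: ConferenceLike): void {
    // 'SOURCE_ADD_REJECTED' is a placeholder event name, not a confirmed LJM constant.
    conference.on('SOURCE_ADD_REJECTED', async (track: JitsiLocalTrackLike) => {
        await conference.removeTrack(track);
        await track.dispose();
    });
}
```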
* chore(deps) update LJM to latest.
* fix(screenshot-capture) Update screenshot capture feature
Add the participants' JID list to the request
Enable screenshot capture only when recording is also on
Updated interval
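A hedged sketch of the config shape implied by the notes above (a capture mode that is either tied to recording or always on); the exact key names are assumptions.

```ts
// Illustrative config shape for the screenshot capture feature (key names assumed).
const config = {
    screenshotCapture: {
        enabled: true,

        // 'recording': capture only while a recording is in progress;
        // 'always': capture whenever screensharing is active.
        mode: 'recording' as 'recording' | 'always'
    }
};
```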
* feat(conference) Impl audio/video mute disable when sender limit is reached.
Jicofo sends a presence when the audio/video sender limit is reached in the conference. The client can then proceed to disable the audio and video mute buttons when this occurs.
* squash: use a different action type and show notification.
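A minimal redux-style sketch of how the client could track the sender-limit state announced by jicofo; the action types and state shape are hypothetical, not the actual feature code.

```ts
// Hypothetical reducer slice: remember whether unmuting is blocked because the
// conference reached the audio/video sender limit announced by jicofo.
const SET_AUDIO_UNMUTE_BLOCKED = 'SET_AUDIO_UNMUTE_BLOCKED';
const SET_VIDEO_UNMUTE_BLOCKED = 'SET_VIDEO_UNMUTE_BLOCKED';

interface SenderLimitState {
    audioUnmuteBlocked: boolean;
    videoUnmuteBlocked: boolean;
}

type SenderLimitAction =
    | { type: typeof SET_AUDIO_UNMUTE_BLOCKED; blocked: boolean }
    | { type: typeof SET_VIDEO_UNMUTE_BLOCKED; blocked: boolean };

function senderLimitReducer(
        state: SenderLimitState = { audioUnmuteBlocked: false, videoUnmuteBlocked: false },
        action: SenderLimitAction): SenderLimitState {
    switch (action.type) {
    case SET_AUDIO_UNMUTE_BLOCKED:
        return { ...state, audioUnmuteBlocked: action.blocked };
    case SET_VIDEO_UNMUTE_BLOCKED:
        return { ...state, videoUnmuteBlocked: action.blocked };
    default:
        return state;
    }
}
```

The UI would read these flags to disable the audio/video mute buttons and to show the notification mentioned above.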
- implement breakout-rooms
- integrated into the participants panel
- managed by moderators
- moderators can send participants to breakout-rooms
- participants can join breakout rooms by themselves
- participants can leave breakout rooms anytime
Co-authored-by: Robert Pintilii <robert.pin9@gmail.com>
Co-authored-by: Saúl Ibarra Corretgé <saghul@jitsi.org>
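A rough sketch of the two flows listed above, with purely hypothetical helper names; the real implementation lives in the breakout-rooms feature and a matching prosody module.

```ts
// Hypothetical API surface illustrating the two flows: a moderator sends a participant
// to a breakout room, and a participant joins one on their own.
interface BreakoutRoomsApi {
    sendParticipantToRoom: (participantId: string, roomId: string) => void;
    joinRoom: (roomId: string) => void;
}

function moderatorSendsParticipant(api: BreakoutRoomsApi, participantId: string, roomId: string): void {
    // Only moderators are allowed to do this; the permission check is server-side.
    api.sendParticipantToRoom(participantId, roomId);
}

function participantJoinsRoom(api: BreakoutRoomsApi, roomId: string): void {
    // Participants can join (and later leave) breakout rooms themselves.
    api.joinRoom(roomId);
}
```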
* Initial implementation; Happy flow
* Maybe revert this
* Functional prototype
* feat(facial-expressions): get stream when changing background effect and use presenter effect with camera
* add(facial-expressions): array that stores the expressions during the meeting
* refactor(facial-expressions): capture ImageBitmap from stream with the ImageCapture API
* add(speaker-stats): expression label
* fix(facial-expression): expression store
* revert: expression label on speaker stats
* add(facial-expressions): broadcast of expression when it changes
* feat: facial expression handling on prosody
* fix(facial-expressions): get the right track when opening and closing camera
* add(speaker-stats): facial expression column
* fix(facial-expressions): allow to start facial recognition only after joining conference
* fix(mod_speakerstats_component): storing last emotion in speaker stats component and sending it
* chore(facial-expressions): change detection from 2000ms to 1000ms
* add(facial-expressions): send expression to server when there is only one participant
* feat(facial-expressions): store expressions as a timeline
* feat(mod_speakerstats_component): store facial expressions as a timeline
* fix(facial-expressions): stop facial recognition only when muting video track
* fix(facial-expressions): presenter mode: get the right track to detect the face
* add: polyfills for ImageCapture for Firefox and Safari
* refactor(facial-expressions): store expressions by counting them in a map
* chore(facial-expressions): remove manually assigning the backend for TensorFlow.js
* feat(facial-expressions): move face-api from main thread to web worker
* fix(facial-expressions): make feature work on firefox and safari
* feat(facial-expressions): camera time tracker
* feat(facial-expressions): camera time tracker in prosody
* add(facial-expressions): expressions time as TimeElapsed object in speaker stats
* fix(facial-expressions): lower the frequency of detection when TF uses the CPU backend
* add(facial-expressions): add duration to the expression and send it with the duration when it is done
* fix(facial-expressions): prosody speaker stats: convert from string to number and bool values set by XMPP
* refactor(facial-expressions): change expressions labels from text to emoji
* refactor(facial-expressions): remove camera time tracker
* add(facial-expressions): detection time interval
* chore(facial-expressions): add docs and minor refactor of the code
* refactor(facial-expressions): put timeout in worker and remove set interval in main thread
* feat(facial-expressions): disable feature in the config
* add(facial-expressions): tooltips of labels in speaker stats
* refactor(facial-expressions): send facial expressions function and remove some unused functions and console logs
* refactor(facial-expressions): rename action type when a change is done to the track by the virtual backgrounds to be used in facial expressions middleware
* chore(facial-expressions): order imports and format some code
* fix(facial-expressions): rebase issues with newer master
* fix(facial-expressions): package-lock.json
* fix(facial-expression): add commented default value of disableFacialRecognition flag and short description
* fix(facial-expressions): change disableFacialRecognition to enableFacialRecognition flag in config
* fix: resources load-test package-lock.json
* fix(facial-expressions): set and get facial expressions only if facial recognition enabled
* add: facial recognition resources folder in .eslintignore
* chore: package-lock update
* fix: package-lock.json
* fix(facial-expressions): GPU memory leak in the web worker
* fix(facial-expressions): set the CPU time interval for detection to 6000ms
* chore(speaker-stats): fix indentation
* chore(facial-expressions): remove empty lines between comments and type declarations
* fix(facial-expressions): remove camera timetracker
* fix(facial-expressions): remove facialRecognitionAllowed flag
* fix(facial-expressions): remove sending interval time to worker
* refactor(facial-expression): middleware
* fix(facial-expression): end tensor scope after setting backend
* fix(facial-expressions): sending info back to worker only on facial expression message
* fix: lint errors
* refactor(facial-expressions): bundle web worker using webpack
* fix: deploy-facial-expressions command in makefile
* chore: fix load test package-lock.json and package.json
* chore: sync package-lock.json
Co-authored-by: Mihai-Andrei Uscat <mihai.uscat@8x8.com>
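A simplified sketch of the main-thread/worker split described in the commits above: frames are captured with the ImageCapture API on the main thread and transferred to a web worker, which runs detection and posts the recognized expression back. The worker file name, message shapes and loading style are illustrative, not the actual implementation, and ImageCapture may need the polyfill mentioned above on some browsers.

```ts
// Main thread (sketch): grab a frame from the local video track and ship it to the worker.
// Worker loading is bundler-specific; this is just one common pattern.
const worker = new Worker(new URL('./faceDetectionWorker.ts', import.meta.url));

async function detectOnce(videoTrack: MediaStreamTrack): Promise<void> {
    const imageCapture = new ImageCapture(videoTrack);
    const bitmap: ImageBitmap = await imageCapture.grabFrame();

    // Transfer the bitmap to the worker to avoid copying pixel data.
    worker.postMessage({ type: 'DETECT', image: bitmap }, [ bitmap ]);
}

worker.onmessage = ({ data }) => {
    if (data.type === 'EXPRESSION') {
        // e.g. dispatch(addFacialExpression(data.expression, data.duration)) -- hypothetical action.
        console.log('detected expression:', data.expression);
    }
};
```

The worker side would run the face-api detection on the received ImageBitmap on its own timeout (instead of a main-thread interval) and post back a message of the form `{ type: 'EXPRESSION', expression, duration }`.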
There are hard-to-handle race conditions around screensharing/presenter mode turning on/off. It's easier to make turning the camera on/off a NO-OP while a switch to screen sharing is in progress than to try to handle this gracefully. It should be okay for the user to click the button again after the switch operation is done.
Ideally this logic will be re-implemented in redux middleware and moved out of the conference.js file.
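A minimal sketch of the NO-OP guard described above; the class, flag and method names are assumptions, not the actual conference.js code.

```ts
// Hypothetical guard: ignore camera toggles while a switch to/from screen sharing is in flight.
class ConferenceController {
    private _switchInProgress = false;

    // Wrap the screen-sharing switch so its in-flight state is tracked.
    async toggleScreenSharing(doSwitch: () => Promise<void>): Promise<void> {
        this._switchInProgress = true;
        try {
            await doSwitch();
        } finally {
            this._switchInProgress = false;
        }
    }

    // Turning the camera on/off is a NO-OP while the switch is in progress; the user
    // can simply press the button again once the switch completes.
    toggleCamera(toggle: () => void): void {
        if (this._switchInProgress) {
            return;
        }
        toggle();
    }
}
```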
* feat: Hides prejoin screen on conference in progress event.
We enter the conference view as early as possible on the conference-in-progress event, as the joined event can be late in a big conference.
Also, we show the conference view only when joining is in progress. For example, with lobby enabled, where we try to join but fail, we do not want to show the conference view for a fraction of a second before showing the lobby screen.
* squash: Drops CONFERENCE_JOIN_IN_PROGRESS.
* squash: Updates ljm with the new JitsiConference event.
* squash: Adds some debug output to the GitHub action.
Makes it easier to catch problems with the package-lock.json file.
This fixes an issue where Safari users cannot hear remote audio if they join audio/video muted. The browser throws the following error when the application tries to execute play on the audio element: 'NotAllowedError: The request is not allowed by the user agent or the platform in the current context, possibly because the user denied permission.' This started happening in Safari 15.
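A sketch of the kind of handling that addresses the Safari 15 error quoted above: play() returns a promise, and a NotAllowedError rejection can be caught and retried after a user gesture. This is a generic browser pattern, not necessarily the exact fix that landed.

```ts
// Generic pattern for handling a blocked play() call (e.g. Safari's NotAllowedError).
async function playRemoteAudio(audioElement: HTMLAudioElement): Promise<void> {
    try {
        await audioElement.play();
    } catch (error: unknown) {
        if ((error as DOMException).name === 'NotAllowedError') {
            // Autoplay was blocked; retry on the next user interaction.
            const retry = () => {
                audioElement.play().catch(() => { /* still blocked, give up quietly */ });
                document.removeEventListener('click', retry);
            };

            document.addEventListener('click', retry);
        }
    }
}
```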
Changed screen capture to a non-effect. Effects are used to alter the stream; this feature does not need to alter the stream, it just needs access to it.
Changed the image diff library. The previous library diffed the whole image, the new one has an early-return threshold.
Use the ImageCapture API to take the screenshot. Added a polyfill for it and a polyfill for createImageBitmap.
Added analytics
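A small sketch of taking a screenshot from the screenshare track with the ImageCapture API, as the notes above describe (a polyfill or type shim may be needed where ImageCapture or createImageBitmap is missing).

```ts
// Grab a single frame from a video track using the ImageCapture API.
async function captureScreenshot(track: MediaStreamTrack): Promise<ImageBitmap> {
    const imageCapture = new ImageCapture(track);

    // grabFrame() resolves with an ImageBitmap of the current frame.
    return imageCapture.grabFrame();
}
```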
* Update moderation in effect notifications
Only display one notification for each media type. Display a notification for keyboard shortcuts as well
* Update muted remotely notification
Display the name of the moderator in the notification
* Fix indentation on moderation menu
* Update text for video moderation
* Added moderator label in participant pane
* Update microphone icon in participant list
For participants that speak, or are noisy, but aren't the dominant speaker, the icon in the participant list will look the same as the dominant speaker icon, but their position in the list will not change
* Added sound for asked to unmute notification
* Code review changes
* Code review changes
Use a simple variable instead of a function for the audio media state
* Move constants to constants file
* Moved constants from notifications to av-moderation
* do not use this.localVideo
* move tracks initialized flag around
* do not use this.localAudio
* untangle use audio/video stream methods
It should be safe to call setVideoMuteStatus and
setAudioMuteStatus regardless of the prejoin page
visibility state.
* add a NO-OP to the use audio/video stream methods and fix a crash in _setLocalAudioVideoStreams when the value is not a promise
* use allSettled
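A sketch of what 'use allSettled' refers to in practice: creating the audio and video tracks in parallel and continuing even if one of them fails. The helper names and track type are illustrative.

```ts
// Create audio and video tracks in parallel; a failure of one must not block the other.
async function createLocalTracks(
        createAudioTrack: () => Promise<MediaStreamTrack>,
        createVideoTrack: () => Promise<MediaStreamTrack>): Promise<MediaStreamTrack[]> {
    const results = await Promise.allSettled([ createAudioTrack(), createVideoTrack() ]);

    // Keep only the tracks that were created successfully.
    return results
        .filter((r): r is PromiseFulfilledResult<MediaStreamTrack> => r.status === 'fulfilled')
        .map(r => r.value);
}
```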
On mobile Safari, when a user joins both audio and video muted, the browser doesn't play out the remote audio because of a WebKit bug. As a workaround, always add the audio track to the peerconnection and then mute the track if needed.
* feat(Filmstrip): Reorder the visible participants in the filmstrip.
The participants are ordered alphabetically and the endpoints with screenshares, shared-videos and dominant speakers (in that order) are bumped to the top of the list. The local participant is also moved to the top left corner as opposed to the bottom right corner.
* squash: Implement review comments.
* squash: store alphabetically sorted list in redux and move shared videos to top.
* squash: Use the DEFAULT_REMOTE_DISPLAY_NAME from interfaceConfig for users without a display name.
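A simplified sketch of the ordering described above: alphabetical base order, with dominant speakers, shared videos and screenshares bumped progressively higher. The participant shape is assumed; the local participant placement is handled separately.

```ts
// Illustrative reordering: alphabetical, then bump dominant speakers, shared videos
// and screenshares (highest priority) to the top of the list.
interface Participant {
    id: string;
    name: string;
    isDominantSpeaker: boolean;
    isSharedVideo: boolean;
    isScreenshare: boolean;
}

function orderParticipants(participants: Participant[]): Participant[] {
    const priority = (p: Participant) =>
        p.isScreenshare ? 3 : p.isSharedVideo ? 2 : p.isDominantSpeaker ? 1 : 0;

    return [ ...participants ].sort((a, b) =>
        priority(b) - priority(a) || a.name.localeCompare(b.name));
}
```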
Some options were missing on the mobile side, notably the callstats options enableDisplayNameInStats and enableEmailInStats. Now the same logic will be used on web and mobile.
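The two options mentioned above, sketched as config.js entries; the values shown are illustrative defaults, not a statement of the actual shipped defaults.

```ts
// Illustrative config.js excerpt for the callstats-related options mentioned above.
const config = {
    // Send the participant's display name to callstats.
    enableDisplayNameInStats: false,

    // Send the participant's email (if available) to callstats.
    enableEmailInStats: false
};
```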