* fix(multi-stream) update selector to find ss track by videoType or mediaType
* ref(multi-stream) move fake ss creation logic and support video type changed
* refactor(multi-stream) decouple sending and receiving multiple screenshare streams
* fix(multi-stream) fix receiver constraints with signaling and without multi-stream
* fix(multi-stream) ensure plan b original SS thumbnail displays avatar
* fix(multi-stream) show fake SS for plan b sender
* refactor(multi-stream) poc for moving SS creation to state listener
* remove reference to fake SS creation
* fix lint errors
* rename to virtual screenshare participants
* fix minor bugs
* rename participant subscriber to specify web support only
* prioritize participants with screen shares
* support local screen share track
* auto pin screen share
* support screen share for large video
* ensure fake screen share participants are sorted
* fix local screen share in vertical filmstrip
* fix local screen share in tile mode
* use FakeScreenShareParticipant component for screen share thumbnails
* ensure changes are behind feature flag and update jsdocs
* fix bug where local screen share was not rendering
* update receiver constraints to include SS source names (see the sketch after this list)
* remove fake ss participant creation on track update
* fix: handle screenshare muted change and track removal
* refactor: update key values for sortedFakeScreenShareParticipants
* address PR comments
* refactor getter for screenshare tracks
* rename state to sortedRemoteFakeScreenShareParticipants
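For the receiver-constraints item above, a hedged sketch of what including screenshare source names could look like, assuming the videobridge's ReceiverVideoConstraints message shape; the source name itself is made up:

// Illustrative only: request full resolution for a screenshare source
// while keeping a low default for all other sources.
conference.setReceiverConstraints({
    defaultConstraints: { maxHeight: 180 },
    onStageSources: [ 'abcd1234-v1' ],
    constraints: {
        'abcd1234-v1': { maxHeight: 1080 }
    }
});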
* feat(multi-stream-support) Add screenshare as a second video track to the call.
This feature is behind a sendMultipleVideoStreams config.js flag. The sourceNameSignaling flag also needs to be enabled. Sending multiple tracks is currently supported only on endpoints running in unified plan mode. However, clients with source-name signaling enabled and running in plan-b can still receive multiple streams.
* squash: check if there is an existing track before adding camera/desktop
* squash: enable multi-stream only on unified plan endpoints.
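A minimal config sketch enabling both flags from the description above; the nesting under config.flags is an assumption about the config layout:

// Sketch: both flags must be on to send a second (screenshare) video
// track. Exact placement inside config.js is an assumption.
config.flags = {
    sourceNameSignaling: true,
    sendMultipleVideoStreams: true
};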
* feat(tracks) Clean up the track if a source addition is rejected.
When jicofo rejects a source-add because of sender limits, dispose the local track and remove it from the conference.
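A sketch of that clean-up, assuming a hypothetical rejection hook; only removeTrack and dispose are real lib-jitsi-meet calls:

// Hypothetical handler for a jicofo source-add rejection: remove the
// local track from the conference, then dispose it to release capture.
function onSourceAddRejected(conference, localTrack) {
    return conference.removeTrack(localTrack)
        .then(() => localTrack.dispose());
}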
* chore(deps) update LJM to latest.
In iOS 15 we observe that not creating the audio track early may result in not
getting audio after unmuting for the first time.
Creating the audio track early means the first unmute doesn't need to add the
track to the conference, resulting in a much faster operation.
Note that creating the track early doesn't mean we will start unmuted; the track
will be muted.
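A minimal sketch of that early-creation idea, assuming standard lib-jitsi-meet calls; the wiring is illustrative:

// Create the audio track early but muted: the first unmute then only
// flips the muted state instead of doing getUserMedia + addTrack.
JitsiMeetJS.createLocalTracks({ devices: [ 'audio' ] })
    .then(([ audioTrack ]) => audioTrack.mute()
        .then(() => conference.addTrack(audioTrack)));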
* feat: Exposes a method for checking whether a remote track has been received and played.
Used for some tests in torture.
* squash: Drop a mismatched string.
It was a duplicate translation key with mismatched content.
* squash: Moves torture-specific functions to features/base/testing.
Listens for media events from the video tag of the large video and stores them in redux.
* squash: Fix comments.
* feat: Listens for media events from the video tag of the remote videos and stores them in redux.
* squash: Fix undefined videoTrack in between switches.
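A hypothetical sketch of the media-event listening described above; the event list and action type are illustrative, not the repo's actual names:

// Forward media events from a <video> element into redux so tests can
// assert that a track was received and played.
const MEDIA_EVENTS = [ 'canplay', 'playing', 'pause', 'error' ];

function listenForMediaEvents(dispatch, videoElement, participantId) {
    MEDIA_EVENTS.forEach(name =>
        videoElement.addEventListener(name, () =>
            dispatch({
                type: 'MEDIA_EVENT_OCCURRED',
                name,
                participantId
            })));
}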
React Native defines neither __filename nor __dirname, so do it artisanally. In
addition, this helps with centralizing the configuration passed to loggers.
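One artisanal way to do it, sketched as a global shim so existing getLogger(__filename)-style call sites keep working; whether the repo shims the globals or passes explicit ids is not shown here:

// React Native defines neither of these Node globals; provide minimal
// stand-ins before any logger configuration runs.
if (typeof global.__filename === 'undefined') {
    global.__filename = '';
}
if (typeof global.__dirname === 'undefined') {
    global.__dirname = '';
}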
Zoltan Bettenbuk suggested the following:
const state = getState();

if (desiredTypes.length === 0) {
-    const { audio, video } = state['features/base/media'];
-
-    audio.muted || desiredTypes.push(MEDIA_TYPE.AUDIO);
-    video.muted || desiredTypes.push(MEDIA_TYPE.VIDEO);
+    const startAudioOnly = getPropertyValue(state, 'startAudioOnly');
+    const startWithAudioMuted
+        = getPropertyValue(state, 'startWithAudioMuted');
+    const startWithVideoMuted
+        = getPropertyValue(state, 'startWithVideoMuted');
+
+    if (!startWithAudioMuted) {
+        desiredTypes.push(MEDIA_TYPE.AUDIO);
+    }
+    if (!startAudioOnly && !startWithVideoMuted) {
+        desiredTypes.push(MEDIA_TYPE.VIDEO);
+    }
}

const availableTypes
The final commit is really a different implementation of the same idea,
but one that takes into account that the state of base/media already
contains the intent of the URL, and that notices the delay with which
the background app state is realized.
Additionally, unbreaks one more case where setAudioOnly is incorrectly
dispatched on CONFERENCE_LEFT or CONFERENCE_FAILED and, consequently,
overrides the intent of the URL.
* ref: Restructures the pinned/unpinned events.
* ref: Refactors the "audio only disabled" event.
* ref: Refactors the "stream switch delay" event.
* ref: Refactors the "select participant failed" event.
* ref: Refactors the "initially muted" events.
* ref: Refactors the screen sharing started/stopped events.
* ref: Restructures the "device list changed" events.
* ref: Restructures the "shared video" events.
* ref: Restructures the "start muted" events.
* ref: Restructures the "start audio only" event.
* ref: Restructures the "sync track state" event.
* ref: Restructures the "callkit" events.
* ref: Restructures the "replace track".
* ref: Restructures keyboard shortcuts events.
* ref: Restructures most of the toolbar events.
* ref: Refactors the API events.
* ref: Restructures the video quality, profile button and invite dialog events.
* ref: Refactors the "device changed" events.
* ref: Refactors the page reload event.
* ref: Removes an unused function.
* ref: Removes a method which is needlessly exposed under a different name.
* ref: Refactors the events from the remote video menu.
* ref: Refactors the events from the profile pane.
* ref: Restructures the recording-related events.
Removes events fired when recording with something other than jibri
(which isn't currently supported anyway).
* ref: Cleans up AnalyticsEvents.js.
* ref: Removes an unused function and adds documentation.
* feat: Adds events for all API calls.
* fix: Addresses feedback.
* fix: Brings back mistakenly removed code.
* fix: Simplifies code and fixes a bug in toggleFilmstrip
when the 'visible' parameter is defined.
* feat: Removes the resolution change application log.
* ref: Uses consistent naming for events' attributes.
Uses "_" as a separator instead of camel case or ".".
* ref: Don't add the user agent and conference name
as permanent properties. The library does this on its own now.
* ref: Adapts the GA handler to changes in lib-jitsi-meet.
* ref: Removes unused fields from the analytics handler initialization.
* ref: Renames the google analytics file and adds docs.
* fix: Fixes the push-to-talk events and logs.
* npm: Updates lib-jitsi-meet to 515374c8d383cb17df8ed76427e6f0fb5ea6ff1e.
* fix: Fixes a recently introduced bug in the google analytics handler.
* ref: Uses "value" instead of "delay" since this is friendlier to GA.
While reviewing "[PREVIEW|RN]: Handle getUserMedia in progress" I
discovered JSDoc comments which could be improved. They are not
necessarily 100% related to the PR.
This commit adds extra actions/Redux state to be able to deal with
the GUM operation being in progress. There will be an early local track
stub in the Redux state for any local track for which GUM has been
called but has not yet completed.
A local track is considered valid only after the TRACK_ADDED event,
when it will have its JitsiLocalTrack instance set.
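An illustrative shape for such a stub; the field names are assumptions, not the repo's actual ones:

// Pre-GUM placeholder kept in base/tracks; jitsiTrack stays undefined
// until GUM completes and TRACK_ADDED supplies the JitsiLocalTrack.
const trackStub = {
    local: true,
    mediaType: MEDIA_TYPE.VIDEO,
    jitsiTrack: undefined
};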
* Javadoc introduced @code as a replacement of <code> and <tt> which is
better aligned with other javadoc tags such as @link. Use it in the
Java source code. If we switch to Kotlin, then we'll definitely use
Markdown.
* There are more uses of @code in the JavaScript source code than <tt>,
so use @code for the sake of consistency (see the sketch after this
list). Eventually, I'd rather we switch to Markdown because it's
easier on my eyes.
* Xcode is plain confused by @code and @link. The Internet says that
Xcode supports the backquote character to denote the beginning and end
of a string of characters which should be formatted for display as
code but it doesn't work for me. <tt> is not rendered at all. So use
the backquote which is rendered itself. Hopefully, if we switch to
Markdown, then it'll be common between JavaScript and Objective-C
source code.
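For the JavaScript bullet above, a small JSDoc illustration of the preferred style:

/**
 * Prefer {@code JitsiConference} over <tt>JitsiConference</tt> in
 * JSDoc, mirroring the @code/@link usage on the Java side.
 */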
When do we need tracks?
- Welcome page (only the video track)
- Conference (depends if starting with audio / video muted is requested)
When do we need to destroy the tracks?
- When we are not in a conference and there is no welcome page
In order to accommodate all the above use cases, a new component is introduced:
BlankWelcomePage. Its purpose is to take the place of the welcome page when it
is disabled. When this component is mounted local tracks are destroyed.
Analogously, a video track is created when the (real) welcome page is created,
and all the desired tracks are created when the Conference component is created.
What are desired tracks? These are the tracks we'd like to use for the
conference that is about to happen. By default both audio and video are desired.
It's possible, however, that the user requested to start the call with no
video/audio, in which case it's muted in base/media and a track is not created.
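A sketch of that computation, mirroring the base/media snippet quoted earlier in this log:

// A track type is desired only if the corresponding media is not muted
// in base/media.
function getDesiredTrackTypes(state) {
    const { audio, video } = state['features/base/media'];
    const desiredTypes = [];

    audio.muted || desiredTypes.push(MEDIA_TYPE.AUDIO);
    video.muted || desiredTypes.push(MEDIA_TYPE.VIDEO);

    return desiredTypes;
}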
The first time the app starts (with the welcome page) it will request permission
for video only, since there is no need for audio in the welcome page. Later,
when a conference is joined permission for audio will be requested when an audio
track is to be created. The audio track is not destroyed when the conference
ends. Yours truly thinks this is not needed since it's a stopped track which is
not using system resources.
* ref: video muted state
Get rid of 'videoMuted' flag in conference.js
* ref: audio muted state
Get rid of 'audioMuted' flag in conference.js
* fix(conference.js|API): early audio/video muted updates
* ref(conference.js): rename isVideoMuted
Rename isVideoMuted to isLocalVideoMuted to be consistent with
isLocalAudioMuted.
* doc|style(conference.js): comments and space after if
* ref: move 'setTrackMuted' to functions
* fix(tracks/middleware): no-lonely-if
* ref(features/toolbox): get rid of last argument
* ref(defaultToolbarButtons): rename var
Simplify the code by using a bitfield instead of a couple of boolean flags. This
allows us to mute the video from multiple places and only make the unmute
effective once they have all unmuted.
Alas, this cannot be applied to the web without a massive refactor, because it
uses the track muted state as the source of truth instead of the media state.
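A minimal sketch of the bitfield approach, with illustrative authority names:

// Each muting "authority" owns one bit; video unmutes only when every
// authority has cleared its bit.
const AUDIO_ONLY = 1 << 0;
const BACKGROUND = 1 << 1;
const USER = 1 << 2;

function computeVideoMuted(videoMuted, authority, muted) {
    return muted ? videoMuted | authority : videoMuted & ~authority;
}

// The camera is effectively muted while the bitfield is non-zero.
const isEffectivelyMuted = videoMuted => videoMuted !== 0;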
Listeners were set for when a track muted or changed its video
type, but the listeners were never removed. This could
cause events to keep firing on the removed tracks, which would
cause redux to fire an error because the tracks were no longer
known. That the tracks still fire events after removal is
another issue...
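A sketch of the corresponding fix: keep a reference to the bound handler so it can be detached when the track is removed. The event constant is lib-jitsi-meet's; the action creator name is illustrative:

// Keeping the handler reference lets us unsubscribe on removal, so no
// events fire on tracks redux no longer knows about.
const onMuteChanged = () => dispatch(trackMutedChanged(track));

track.addEventListener(
    JitsiMeetJS.events.track.TRACK_MUTE_CHANGED, onMuteChanged);

// Later, when the track is removed:
track.removeEventListener(
    JitsiMeetJS.events.track.TRACK_MUTE_CHANGED, onMuteChanged);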