This adds an option to disable video autoplay, which will be used mostly with malleus (our Selenium-based load testing tool for testing the new bridge). Disabling video rendering lowers the resource utilisation of the Selenium nodes.
Bring back the workaround introduced in afd2aea7
but removed in 21dcc41d. On conference join,
several other actions have already been fired
that try to set the large video participant
and select the participant on the bridge.
The problem is that there is no conference yet
during these actions, so the select participant
request never fires. Subsequent actions then do
not fire select participant because the large
video participant has not changed.
Using anything non-serializable for action types is discouraged:
https://redux.js.org/faq/actions#actions
In fact, this is the Flow definition for dispatching actions:
declare export type DispatchAPI<A> = (action: A) => A;
declare export type Dispatch<A: { type: $Subtype<string> }> = DispatchAPI<A>;
Note how the `type` field is defined as a subtype of string, which Symbol isn’t.
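For illustration, a minimal sketch of the plain string-constant pattern (the action and field names here are hypothetical, not ones from this repo):

    // A plain string constant serializes cleanly, unlike a Symbol.
    export const SET_LARGE_VIDEO_PARTICIPANT = 'SET_LARGE_VIDEO_PARTICIPANT';

    // Action creator returning a fully serializable action object.
    export function setLargeVideoParticipant(participantId) {
        return {
            type: SET_LARGE_VIDEO_PARTICIPANT,
            participantId
        };
    }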
* fix(large-video): do not show avatar if no url
By default the large video dominant speaker avatar
has an empty src, which results in a broken
image being displayed. There is also a disconnect
with non-React code trying to set an undefined src.
Until local avatar generation work is done in the
future, just don't show the avatar.
* fix(conference): set the room instance earlier
Set the room instance on APP.conference before triggering
a redux update of the conference being set, because
middleware can then fire and call methods on APP.conference
that depend on the room being set.
* get local participant directly from store instead of from global
* feat(tile-view): initial implementation for mobile
- Create a tile view component for displaying thumbnails in a
two-dimensional grid (a rough sketch of the row layout follows this
list).
- Update the existing TileViewButton so it shows a label in the
overflow menu.
- Modify conference so it can display TileView while hiding
Filmstrip.
- Modify Thumbnail so its width/height can be set and to prevent
pinning while in tile view mode.
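As a rough sketch of the row layout for the two-dimensional grid (the column count and helper name are assumptions for illustration, not the actual component code):

    // Split the thumbnails into rows of a fixed column count.
    const COLUMN_COUNT = 2;

    function toRows(thumbnails) {
        const rows = [];

        for (let i = 0; i < thumbnails.length; i += COLUMN_COUNT) {
            rows.push(thumbnails.slice(i, i + COLUMN_COUNT));
        }

        return rows;
    }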
* use style array for thumbnail styles
* change ternary to Math.min for expressiveness
* use DimensionsDetector
* pass explicit disableTint prop
* use makeAspectRatioAware instead of aspectRatio prop
* update docs
* fix docs again (fix laziest copy/paste job I've ever done)
* large-video: rename onPress prop to onClick
* change forEach to for...of
* use truthy check fallthrough logic instead of explicit if
* put tile view button second to last in menu
* move spacer to a constant
* the magical incantation to make flow shut up
* feat(tile-view): initial implementation for tile view
- Modify the classname on the app root so layout can adjust
depending on the desired layout mode: vertical filmstrip,
horizontal filmstrip, or tile view.
- Create a button for toggling tile view.
- Add a StateListenerRegistry to automatically update the
selected participant and max receiver frame height on tile
view toggle.
- Resize thumbnails when switching in and out of tile view.
- Move the local video when switching in and out of tile view.
- Update reactified pieces of thumbnails when switching in and
out of tile view.
- Cap the max receiver video quality in tile view based on tile
size (a rough sketch follows this list).
- Use CSS to hide UI components that should not display in tile
view.
- Signal follow me changes.
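A rough sketch of the "cap the max receiver video quality based on tile size" idea (the aspect ratio and function name are illustrative, not the exact math used here):

    // Derive a tile height from the available area and column count, then
    // use it to cap the height requested from the bridge.
    function getMaxReceiverHeight(containerWidth, containerHeight, columns, tileCount) {
        const rows = Math.ceil(tileCount / columns);
        const tileWidth = containerWidth / columns;
        const tileHeight = Math.min(containerHeight / rows, tileWidth * 9 / 16);

        return Math.floor(tileHeight);
    }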
* change local video id for tests
* change approach: leverage more css
* squash: fix some formatting
* squash: prevent pinning, hide pin border in tile view
* squash: change logic for maxReceiverQuality due to sidestepping resizing logic
* squash: fix typo, columns configurable, remove unused constants
* squash: resize with js again
* squash: use yana's math for calculating tile size
* [WEB] add UI for transcription
* add analytics event for button, do not use global APP object
* use props instead of state, use local conference to kick participant
* put imports in alphabetical order
* add translation for TranscribingLabel
* fix merge conflict
* add closed caption button
* purge OverFlowMenuItem which starts and stops Transcription
* readd closed caption icon and fix small issues due to purge
* delete unused icon in _font.scss
* ref(large-video): combine selectParticipant logic from web
Currently native/middleware/redux has its own logic for selecting a participant
on the bridge. To have web respect that logic, a few changes are needed.
- Web no longer has its own call to selectParticipant.
- To keep in line with web logic, the selectParticipant action should act even
when there is no track. This makes it so that when a participant does get a
track, the bridge will send high quality. The bridge can already handle the
case where the selected participant does not have a video track.
- The timing of web is such that on joining an existing conference, a
participant joins and the participant's tracks get updated and then the
conference is joined. The result is that selectParticipant does not get fired
because it no-ops when there is no conference. To avoid having to make
uncertain changes (to be lazy), update the selected participant on conference
join as well.
* squash: update comment, pass message to error handler
* Show subtitles when Jigasi sends transcription results in JSON
* fix: Import PropTypes from prop-types.
* apply feedback on initial PR
* Changed Object to Map, alphabetic ordering fixes, CSS changes in transcription subtitles
* Sends Map of transcriptMessages as prop to Component
* Documentation fixes and uses config in redux state
* Minor doc fix
* rename feature 'transcription' to 'subtitles'
* Moves subtitles config to interfaceConfig and minor fixes
* minor lint fix
In the current middleware logic, when the local participant becomes
dominant speaker, a new participant can be selected as the one to
receive high quality video from. This means large-video could
potentially switch to another participant when the local participant
becomes dominant speaker. Prevent such behavior.
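A minimal sketch of the guard this describes, assuming hypothetical helper and action names rather than the exact code in this feature:

    // Assumed imports (paths illustrative):
    // import { getLocalParticipant } from '../base/participants';
    // import { selectParticipantInLargeVideo } from './actions';

    // When the new dominant speaker is the local participant, keep the
    // current large-video participant instead of selecting a new one.
    function onDominantSpeakerChanged(store, action) {
        const localParticipant = getLocalParticipant(store.getState());

        if (localParticipant && action.participant.id === localParticipant.id) {
            return;
        }

        store.dispatch(selectParticipantInLargeVideo());
    }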
* feat(recording): frontend logic can support live streaming and recording
Instead of either live streaming or recording, now both can live together. The
changes to facilitate such include the following:
- Killing the state storage in Recording.js. Instead, state is stored in the
lib and updated in redux for labels to display the necessary state updates.
- Creating a new container, Labels, for recording labels. Previously labels were
manually created and positioned. The container can create a reasonable number
of labels and only the container itself needs to be positioned with CSS. The
VideoQualityLabel has been shoved into the container as well because it moves
along with the recording labels.
- The action for updating recording state has been modified to enable updating
an array of recording sessions, to support having multiple sessions (a sketch
follows this list).
- Confirmation dialogs for stopping and starting a file recording session have
been created, as they previously were jquery modals opened by Recording.js.
- Toolbox.web displays live streaming and recording buttons based on
configuration instead of recording availability.
- VideoQualityLabel and RecordingLabel have been simplified to remove any
positioning logic, as the Labels container handles such.
- Previous recording state update logic has been moved into the RecordingLabel
component. Each RecordingLabel is in charge of displaying state for a
recording session. The display UX has been left alone.
- Sipgw availability is no longer broadcast so remove logic depending on its
state. Some moving around of code was necessary to get around linting errors
about the existing code being too deeply nested (even though I didn't touch
it).
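A hedged sketch of what updating an array of recording sessions might look like (the session shape and id field are assumptions, not the repo's actual reducer):

    // Merge updated session data into the sessions array, keyed by id.
    function updateSessions(sessions, sessionData) {
        if (sessions.some(session => session.id === sessionData.id)) {
            return sessions.map(session =>
                session.id === sessionData.id
                    ? { ...session, ...sessionData }
                    : session);
        }

        return [ ...sessions, sessionData ];
    }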
* work around lib-jitsi-meet circular dependency issues
* refactor labels to use html base
* pass in translation keys to video quality label
* add video quality classnames for torture tests
* break up, rearrange recorder session update listener
* add comment about disabling startup resize animation
* rename session to sessionData
* chore(deps): update to latest lib for recording changes
To reduce the amount of motion that has to be blurred, use a canvas
to essentially set the FPS of the video background. This canvas
component is behind a temporary feature flag, as is the ability to
disable the blur, so it can be experimented with in deployed
environments.
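A minimal sketch of the canvas approach, assuming a video source element; the element ids and FPS value are illustrative:

    // Redraw the video onto a canvas at a fixed, low FPS so the blurred
    // background has less motion to process.
    const video = document.getElementById('largeVideo');
    const canvas = document.getElementById('largeVideoBackgroundCanvas');
    const context = canvas.getContext('2d');
    const FPS = 15;

    setInterval(() => {
        context.drawImage(video, 0, 0, canvas.width, canvas.height);
    }, 1000 / FPS);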
Since the main conference container is no longer "clickable", there must
be a way to click on the "large video". A clickable TestHint nested
in ParticipantView makes it easier to deal with the fact that the
click handler is not always on the same component (required for the
pinch to zoom feature to work correctly).
TouchableWithoutFeedback and TouchableHighlight interfere with the
implementation of 'pinch to zoom' to come. We prepare for it by driving
the onClick/onPress handler(s) out of Conference, through LargeVideo and
ParticipantView into Video itself where the bulk of 'pinch to zoom' will
be implemented.
When in PiP mode the LargeView will not be large enough to hold the avatar (for
those interested in the details, our avatar's size is 200, and in PiP mode the
app is resized to about 150).
In order to solve this, this PR refactors how the avatar style is passed along,
reducing it to a single "size" prop. With this single prop, the Avatar component
will compute the width, height and borderRadius, plus deal with some Android
shenanigans.
In addition, the LargeView component now uses DimensionsDetector to check its
own size and adjust the size prop passed to the Avatar component as needed.
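Roughly, the style derived from the single size prop could look like this sketch (not the exact component code):

    // Compute width, height and borderRadius from one size value.
    function getAvatarStyle(size) {
        return {
            borderRadius: size / 2,
            height: size,
            width: size
        };
    }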
* ref(large-video): reactify background
This is pre-requisite work for disabling the background on
certain browsers, namely Firefox. By moving the component
to react, and in general encapsulating background logic,
selectively disabling the background will be easier.
The component was left for LargeVideo to update so it can
continue to coordinate update timing with the actual large
video display. If the background were moved completely into
react and redux with LargeVideo, then background updates would
occur before large video updates causing visual jank.
* fix(large-video): do not show background for Firefox and temasys
Firefox has performance issues with adding filter effects on
animated elements. On temasys, the background videos weren't
really displaying anyway.
* some props refactoring
Instead of passing in classes to LargeVideoBackground, rely on
explicit props. At some point LargeVideo will have to be reactified
and the relationship between it and LargeVideoBackground might
change, so for now make use of props to be explicit about
how LargeVideoBackground can be modified.
Also, set the jitsiTrack to display on LargeVideoBackground to
null if the background is not displayed. This was an existing
optimization, although previously done with pausing and playing.
* squash: use newly exposed RTCBrowserType
* squash: rebase and use new lib browser util
* squash: move hiding logic all into LargeVideo
* squash: remove hiding of background on stream change. hopefully doesn't break anything
The video will switch to the avatar and be tinted with gray. On the large view,
a text message indicating the user has connectivity issues will be shown.
Never show the local participant in the large view unless it's the only
participant.
This fixes 2 issues:
- selecting the local participant when the camera permission wasn't granted
- selecting the other participant when they join a 1-1 call with video muted
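As a sketch of the selection rule described above (the participant shape and helper name are assumptions):

    // Prefer any remote participant for the large view; fall back to the
    // local participant only when nobody else is in the conference.
    function chooseLargeViewParticipant(participants) {
        const remote = participants.filter(p => !p.local);

        return remote.length ? remote[0] : participants.find(p => p.local);
    }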
* Javadoc introduced @code as a replacement for <code> and <tt>, which is
better aligned with other javadoc tags such as @link. Use it in the
Java source code. If we switch to Kotlin, then we'll definitely use
Markdown.
* There are more uses of @code in the JavaScript source code than <tt>
so use @code for the sake of consistency. Eventually, I'd rather we
switch to Markdown because it's easier on my eyes.
* Xcode is plain confused by @code and @link. The Internet says that
Xcode supports the backquote character to denote the beginning and end
of a string of characters which should be formatted for display as
code, but it doesn't work for me. <tt> is not rendered at all. So use
the backquote, which is rendered itself. Hopefully, if we switch to
Markdown, then it'll be common between JavaScript and Objective-C
source code.