Up until now we relied on implicit loading of middlewares and reducers, through
having imports in each feature's index.js.
This leads to many complex import cycles which result in (sometimes) hard-to-fix
bugs and (often) break mobile, because a web-only feature gets imported on
mobile too thanks to the implicit loading.
This PR changes that to make the process explicit. Both middlewares and reducers
are imported in a single place, the app entrypoint. They have been divided into
3 categories: any (code shared by all platforms), web and native.
Ideally no feature should have an index.js exporting actions, action types and
components, but that's a larger ordeal, so this is just the first step in
getting there. In order to both set an example and avoid large cycles, the app
feature has been refactored to not have an index.js itself.
It's an evolution of audio-only mode, where we also allow for receiving a remote
screen-share.
Diving deeper: this basically sets last N to 1 or 0 depending on the
availability of a screen-share.
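A minimal sketch of that rule (the function and argument names are assumptions
for illustration, not the actual implementation):

    // Sketch: in low bandwidth mode receive at most one remote video stream,
    // and only when a remote screen-share is available.
    function computeLastN(audioOnly, hasRemoteScreenShare) {
        if (!audioOnly) {
            return undefined; // leave last N alone outside this mode
        }

        return hasRemoteScreenShare ? 1 : 0;
    }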
This refactors all handling of audio-only and last N into 2 features, in
preparation for "low bandwidth mode".
The main motivation to do this is that lastN is a "global" setting so it helps
to have all processing for it in a single place.
* Updates the kick notification to show who kicked us.
* Notifies participants that someone was kicked.
* Shows a notification to the user who is remotely muted.
* Updates the notification type.
* Adds the "muted by" notification for mobile.
* Moves the code to React and adds the kick notifications to mobile.
* Updates lib-jitsi-meet.
Using anything non-serializable for action types is discouraged:
https://redux.js.org/faq/actions#actions
In fact, this is the Flow definition for dispatching actions:
declare export type DispatchAPI<A> = (action: A) => A;
declare export type Dispatch<A: { type: $Subtype<string> }> = DispatchAPI<A>;
Note how the `type` field is defined as a subtype of string, which Symbol isn't.
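For illustration only (the constant below is made up, not code from this
change), a plain string constant keeps the action serializable and satisfies
that constraint:

    // Action types as plain string constants instead of Symbols.
    export const SET_EXAMPLE_FLAG = 'SET_EXAMPLE_FLAG';

    // e.g. store.dispatch({ type: SET_EXAMPLE_FLAG, value: true });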
Updating react-native-fast-image brings a couple of interesting changes:
- onLoad is not called for cached images (reported and ignored upstream)
- load progress does not work if the component is not displayed (on Android)
In order to fix this, a combination of 2 approaches was used (sketched below):
- onLoadEnd / onError are used to detect if the image is loaded
- off-screen rendering is used on Android to get progress events
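A simplified sketch of the first approach (component and handler details
reduced to the bare minimum; not the actual Avatar code):

    import React, { Component } from 'react';
    import FastImage from 'react-native-fast-image';

    // onLoad is unreliable for cached images, so load completion and failure
    // are tracked via onLoadEnd (fires on success and failure) and onError.
    class AvatarSketch extends Component {
        state = { error: false, loaded: false };

        render() {
            return (
                <FastImage
                    onError = { () => this.setState({ error: true }) }
                    onLoadEnd = { () => this.setState({ loaded: true }) }
                    source = {{ uri: this.props.uri }} />
            );
        }
    }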
While implementing the above, yours truly noticed the complexity was increasing
way too much, so some extra refactoring was also performed:
- componentWillReceiveProps is dropped
- an auxiliary component (AvatarContent) is used for the actual content of the
  Avatar, with the Avatar passing the key prop to AvatarContent
Using the key prop ensures AvatarContent will be recreated if the URI changes,
which is not a bad idea anyway, since the new image needs to be downloaded.
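A rough sketch of the key prop idea (both components reduced to a single prop;
the import path is hypothetical):

    import React from 'react';

    // AvatarContent is the auxiliary component described above.
    import AvatarContent from './AvatarContent';

    // A changed `key` makes React discard the previous AvatarContent instance
    // and mount a new one, so a changed URI always starts a fresh download.
    export function Avatar({ uri }) {
        return <AvatarContent key = { uri } uri = { uri } />;
    }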
Filmstrip remote thumbnails display under certain conditions, as
defined in filmstrip/functions.web.js. Previously the raw
participant count was used, which included fake participants.
Using the selector getParticipantCount excludes fake participants,
causing YouTube thumbnails to remain hidden in a 1-on-1 call.
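A rough before/after sketch (the helper name, import path and threshold are
made up for illustration; the real condition lives in
filmstrip/functions.web.js):

    import { getParticipantCount } from '../base/participants';

    // Previously something like the length of the raw participant list was
    // used, which also counted fake participants such as a shared YouTube
    // video. The selector excludes them, so a 1-on-1 call with a shared video
    // still hides the remote thumbnails.
    export function _isGroupCall(state) {
        return getParticipantCount(state) > 2;
    }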
Use react-native-fast-image, which provides 2 fully native image
implementations based on well known and mature (native) libraries.
This gets rid of 2 libraries which were observed to be a source of bugs and
created trouble with dependencies: react-native-fetch-blob and
react-native-img-cache. They are also no longer well maintained.
* [WEB] add UI for transcription
* add analytics event for button, do not use global APP object
* use props instead of state, use local conference to kick participant
* put imports in alphabetical order
* add translation for TranscribingLabel
* fix merge conflict
* add closed caption button
* purge OverFlowMenuItem which starts and stops Transcription
* re-add closed caption icon and fix small issues due to the purge
* delete unused icon in _font.scss