Up until now we relied on implicit loading of middlewares and reducers, by way
of imports in each feature's index.js.
This creates many complex import cycles, which result in bugs that are
sometimes hard to fix, and it often breaks mobile because a web-only feature
gets imported on mobile too, thanks to the implicit loading.
This PR changes that and makes the process explicit. Both middlewares and
reducers are now imported in a single place, the app entrypoint. They have been
divided into 3 categories: any, web and native, representing where they apply:
all platforms, web only and mobile only, respectively.
Ideally no feature should have an index.js exporting actions, action types and
components, but that's a larger ordeal, so this is just the first step in
getting there. In order to both set an example and avoid large cycles, the app
feature has been refactored to not have an index.js itself.
This refactors all handling of audio-only and last N into 2 features, in
preparation for "low bandwidth mode".
The main motivation for doing this is that lastN is a "global" setting, so it
helps to have all its processing in a single place.
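For illustration, the dedicated feature might expose something along these
lines (the path and names are assumptions, not necessarily the final ones):

    // features/base/lastn/actions.js (hypothetical path)
    export const SET_LAST_N = 'SET_LAST_N';

    // Single entry point for changing last N, no matter who asks:
    // audio-only mode, low bandwidth mode, etc.
    export function setLastN(lastN) {
        return {
            type: SET_LAST_N,
            lastN
        };
    }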
Using anything non-serializable for action types is discouraged:
https://redux.js.org/faq/actions#actions
In fact, this is the Flow definition for dispatching actions:
declare export type DispatchAPI<A> = (action: A) => A;
declare export type Dispatch<A: { type: $Subtype<string> }> = DispatchAPI<A>;
Note how the `type` field is defined as a subtype of string, which Symbol isn’t.
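So action types must be plain strings. For example:

    // Before: a Symbol is neither serializable nor a subtype of string.
    // export const TOGGLE_AUDIO_ONLY = Symbol('TOGGLE_AUDIO_ONLY');

    // After: a plain string satisfies Flow and serializes cleanly.
    export const TOGGLE_AUDIO_ONLY = 'TOGGLE_AUDIO_ONLY';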
BaseApp does all the heavy lifting related to creating the redux store,
navigation, and so on.
App currently handles URL props and actually triggering navigation based on
them.
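In rough terms, the split looks like this (a sketch only; the reducer and the
method names below are made up for illustration):

    import { Component } from 'react';
    import { createStore } from 'redux';

    // Placeholder reducer; the real one combines the reducers imported
    // explicitly in the entrypoint.
    const reducer = (state = {}) => state;

    // BaseApp: creates the redux store, sets up navigation, and so on.
    class BaseApp extends Component {
        constructor(props) {
            super(props);
            this.state = { store: createStore(reducer) };
        }

        render() {
            return null; // the real thing renders the navigator here
        }
    }

    // App: only deals with URL props and triggering navigation.
    class App extends BaseApp {
        componentDidMount() {
            this._maybeNavigate(this.props.url);
        }

        _maybeNavigate(url) {
            // Hypothetical: translate the url prop into navigation.
        }
    }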
On Android we go into "immersive mode" when in a conference; this is our way of
being full-screen. There are occasions, however, in which Android takes us out
of immersive mode without us (the application / SDK) knowing: when a child
activity is started, a modal window is shown, etc.
In order to be resilient to any possible change in the immersive mode, register
a listener which will be called when Android changes it, so we can re-evaluate
if we need it and thus re-enable it.
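A sketch of the idea, assuming the react-native-immersive module (the API
names are from memory, treat them as an assumption; _shouldBeImmersive is
made up here):

    import { Immersive } from 'react-native-immersive';

    function _shouldBeImmersive() {
        // Hypothetical: would check the redux state for an ongoing
        // conference here.
        return true;
    }

    // Android took us out of immersive mode behind our back (child
    // activity, modal window, ...): re-evaluate and re-enable it if the
    // conditions still call for it.
    function _onImmersiveChange() {
        if (_shouldBeImmersive()) {
            Immersive.setImmersive(true);
        }
    }

    Immersive.addImmersiveListener(_onImmersiveChange);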
Simplify the code by using a bitfield instead of a couple of boolean flags.
This allows us to mute the video from multiple places and only make the unmute
effective once they have all unmuted.
Alas, this cannot be applied to the web without a massive refactor, because it
uses the track muted state as the source of truth instead of the media state.
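A minimal sketch of the bitfield (the authority names and values are
illustrative):

    // Each bit represents one independent reason ("authority") for the
    // video to be muted.
    export const VIDEO_MUTISM_AUTHORITY = {
        AUDIO_ONLY: 1 << 0,
        BACKGROUND: 1 << 1,
        USER: 1 << 2
    };

    // Muting sets the authority's bit, unmuting clears it. The video is
    // effectively unmuted only once the whole field is back to 0.
    function computeVideoMuted(current, authority, muted) {
        return muted ? current | authority : current & ~authority;
    }

For example, the user unmuting clears only the USER bit; if AUDIO_ONLY is
still set, the video stays muted until that authority unmutes as well.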
When a dialog is opened on Android, full-screen mode is exited but we (the app)
know nothing about this. Make sure we re-enter full-screen mode once a dialog is
closed, if the conditions to be in such mode are still met.
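For example, a middleware could hook the dialog actions (HIDE_DIALOG is
assumed to be the relevant action type, the import paths are guesses, and
_shouldBeImmersive is the same hypothetical check as above, here taking the
redux state):

    import { Immersive } from 'react-native-immersive';

    import { HIDE_DIALOG } from '../../base/dialog';
    import { MiddlewareRegistry } from '../../base/redux';

    // Hypothetical: true iff full-screen should be on for this state.
    const _shouldBeImmersive = state => /* check conference state */ true;

    MiddlewareRegistry.register(store => next => action => {
        const result = next(action);

        if (action.type === HIDE_DIALOG
                && _shouldBeImmersive(store.getState())) {
            // The dialog kicked us out of full-screen; go back in.
            Immersive.setImmersive(true);
        }

        return result;
    });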
The behavior can be triggered with the toggleAudioOnly action, which is
currently fired with a button.
The following aspects of the conference will change when in audio only mode:
- local video is muted
- last N is set to 0 (effectively muting remote video)
- full-screen mode is exited
- audio mode is set to "audio chat" (default output is the earpiece)
- the wake lock is disengaged
One aspect not handled in this patch is disabling the video mute button while in
audio only mode. The user should not be able to turn back video on in that case.
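Putting it together, a middleware sketch of the reaction (the import paths,
action and helper names are assumptions; toggleAudioOnly is assumed to resolve
into a SET_AUDIO_ONLY action carrying the new flag):

    import { setLastN } from '../base/lastn';
    import { VIDEO_MUTISM_AUTHORITY, setVideoMuted } from '../base/media';
    import { MiddlewareRegistry } from '../base/redux';

    import { SET_AUDIO_ONLY } from './actionTypes';

    MiddlewareRegistry.register(store => next => action => {
        const result = next(action);

        if (action.type === SET_AUDIO_ONLY) {
            const { audioOnly } = action;

            // Mute local video through the AUDIO_ONLY authority bit and
            // set last N to 0, effectively muting remote video too.
            store.dispatch(
                setVideoMuted(audioOnly, VIDEO_MUTISM_AUTHORITY.AUDIO_ONLY));
            store.dispatch(setLastN(audioOnly ? 0 : undefined));

            // Full-screen, audio mode and the wake lock would be handled
            // here as well.
        }

        return result;
    });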