Simplify the code by using a bitfield instead of a couple of boolean flags. This
allows us to mute the video from multiple places and only make the unmute
effective once they have all unmuted.
Alas, this cannot be applied to the web without a massive refactor, because it
uses the track muted state as the source of truth instead of the media state.
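A minimal sketch of the bitfield idea follows; the flag names are
illustrative, not the actual jitsi-meet ones. Each reason for muting owns
one bit, and the unmute only takes effect once every bit has been cleared.

    // Illustrative flag names; the real ones in jitsi-meet may differ.
    const MUTED_BY_USER = 1 << 0;       // muted via the toolbar button
    const MUTED_BY_BACKGROUND = 1 << 1; // muted because the app was backgrounded

    let videoMuted = 0; // 0 means "not muted by anyone"

    function muteVideo(reason: number): void {
        videoMuted |= reason;
        updateTrack();
    }

    function unmuteVideo(reason: number): void {
        videoMuted &= ~reason;
        updateTrack();
    }

    function updateTrack(): void {
        // The unmute only becomes effective once all reasons are cleared.
        console.log(videoMuted === 0 ? 'video enabled' : 'video muted');
    }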
Make the toolbar's microphone button enabled whenever any 'audioinput'
devices are available, and allow adding audio during the conference even
if microphone permissions were denied on startup.
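A hedged illustration of the device check (not the actual jitsi-meet
code): the standard enumerateDevices() API reports 'audioinput' devices
even when permission was denied, which is what makes it possible to keep
the button enabled.

    // Sketch: 'audioinput' devices are listed even without permission,
    // although their labels are empty in that case.
    async function hasAudioInputDevices(): Promise<boolean> {
        const devices = await navigator.mediaDevices.enumerateDevices();

        return devices.some(device => device.kind === 'audioinput');
    }

The toolbar button would then be enabled whenever this resolves to true.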
Dynamically enable/disable the toolbar video button. Prior to this
commit, if we started with no video there was no way to enable it later
on.
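A rough sketch of the idea, with a hypothetical setVideoButtonEnabled
standing in for whatever the toolbar actually exposes:

    // Sketch: keep the toolbar video button in sync with video availability.
    function onLocalTracksChanged(tracks: MediaStreamTrack[]): void {
        setVideoButtonEnabled(tracks.some(track => track.kind === 'video'));
    }

    function setVideoButtonEnabled(enabled: boolean): void {
        // Hypothetical UI hook; in jitsi-meet this maps to toolbar state.
        console.log(`video button ${enabled ? 'enabled' : 'disabled'}`);
    }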
Use the custom _switchCamera API provided by react-native-webrtc to toggle the
camera instead of destroying the current track and creating a new one.
_switchCamera is implemented at a low level, so the track perceives no changes.
This makes switching a lot faster and less involved, since the capturer doesn't
need to be destroyed and re-created.
In addition, don't mirror the video for the back camera.
Ref: https://github.com/oney/react-native-webrtc/pull/235
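A sketch of how the switch might look with react-native-webrtc (the
getUserMedia constraints are simplified; the real call sites in
jitsi-meet are more involved):

    import { mediaDevices } from 'react-native-webrtc';

    // Toggle between front and back cameras without recreating the track.
    async function toggleCamera(): Promise<void> {
        const stream = await mediaDevices.getUserMedia({ video: true });
        const [ videoTrack ] = stream.getVideoTracks();

        // _switchCamera() is a react-native-webrtc extension, not part of
        // the standard WebRTC API; it flips the capturer in place, so the
        // track itself perceives no change.
        (videoTrack as any)._switchCamera();
    }

The mirroring part then presumably comes down to flipping RTCView's
mirror prop depending on which camera is active.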
As an intermediate step on the path to merging jitsi-meet and
jitsi-meet-react, import the whole source code of jitsi-meet-react as it
stands at
2f23d98424
i.e. the latest master at the time of this import. No modifications are
applied to the imported source code in order to preserve a complete
snapshot of it in the repository of jitsi-meet and, thus, facilitate
comparison later on. Consequently, the source code of jitsi-meet and/or
jitsi-meet-react may not work. For example, jitsi-meet's jshint may be
unable to parse jitsi-meet-react's source code.