Don't require autofocus because that prevents the app from appearing in
Google Play Store for some devices.
Don't require the camera for the same reason, but also because camera/video
is not a mandatory feature of the app; it is merely very desirable.
JitsiMeetViewListener is an integral part of the public API of Jitsi
Meet SDK for Android. Utilize it in the Debug configuration of the Jitsi
Meet app for Android in order to increase (1) awareness of API breakages
and (2) API coverage.
The same goes for JitsiMeetViewDelegate in Jitsi Meet SDK and app for
iOS.
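For illustration, a minimal Java sketch of such Debug-only wiring; the event
name onConferenceJoined and the Map<String, Object> payload are assumptions,
not taken from the actual interface:

    import android.util.Log;

    import java.util.Map;

    // Sketch only: a Debug-build listener whose handlers merely log, which is
    // enough to exercise the public API and surface breakages early. The real
    // JitsiMeetViewListener may declare more methods than shown here.
    public class DebugJitsiMeetViewListener /* implements JitsiMeetViewListener */ {
        public void onConferenceJoined(Map<String, Object> data) {
            Log.d("JitsiMeetDebug", "CONFERENCE_JOINED " + data);
        }
    }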
The error is stored in the redux store in base/config so other components can
consult it. It is also broadcast as a new event in the external API for the
SDK.
JitsiMeetViewListener currently has methods of one and the same pattern,
so adding new methods (i.e. events, i.e. redux action types) is a matter
of repetition in the Java source code. Speed up the support of new events
by trying to deal with them in a generic way.
The same goes for JitsiMeetViewDelegate.
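One conceivable shape of that generic handling, sketched in Java with
reflection; the CONFERENCE_JOINED -> onConferenceJoined naming convention and
the Map payload are assumptions for illustration:

    import java.lang.reflect.Method;
    import java.util.Locale;
    import java.util.Map;

    final class ListenerDispatcher {
        // Map a redux action type such as "CONFERENCE_JOINED" onto a listener
        // method such as onConferenceJoined(Map) and invoke it if present, so
        // new events require no per-event Java code.
        static void maybeDispatch(Object listener, String actionType, Map<String, Object> data) {
            StringBuilder name = new StringBuilder("on");
            for (String word : actionType.split("_")) {
                name.append(word.charAt(0)).append(word.substring(1).toLowerCase(Locale.ROOT));
            }
            try {
                Method method = listener.getClass().getMethod(name.toString(), Map.class);
                method.invoke(listener, data);
            } catch (NoSuchMethodException e) {
                // The listener does not expose this event; silently ignore it.
            } catch (ReflectiveOperationException e) {
                throw new RuntimeException(e);
            }
        }
    }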
It's a global action, and if we do that, other applications won't be able to
use it. I experienced this with the system camera. We do, however, make sure
to enable it when we need to.
Note that enabling it doesn't mean we are *using* it. It just means we *can*,
and that we will get actual audio when we do.
Avatars are cached to the filesystem and loaded from there when requested again.
The cache is cleaned after a conference ends and on application startup
(defensive move).
In addition, implement a fully local avatar system, which is used as a fallback
when loading a remote avatar fails. It can also be forced using a prop.
The fully local avatars use a user icon as a mask and apply a background color
which is picked by hashing the URI passed to the avatar. If no URI is passed, a
random color is chosen.
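Roughly, the color picking amounts to something like the following (shown in
Java for brevity, even though the actual implementation is JavaScript; the
palette and hash below are made up for illustration):

    import java.util.Random;

    final class AvatarColors {
        // Made-up palette; the real one lives in the JavaScript code.
        private static final String[] PALETTE = { "#5A8DEE", "#E0595C", "#57A773", "#F2A154" };

        // Hash the avatar URI into the palette so a given participant always
        // gets the same background color; with no URI, pick one at random.
        static String colorFor(String uri) {
            if (uri == null) {
                return PALETTE[new Random().nextInt(PALETTE.length)];
            }
            int hash = 0;
            for (int i = 0; i < uri.length(); i++) {
                hash = 31 * hash + uri.charAt(i);
            }
            return PALETTE[Math.abs(hash % PALETTE.length)];
        }
    }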
A grace period of 1 second is also implemented so a default local avatar will be
rendered if an Avatar component is mounted but has no URI. If a URI is specified
later on, it will be loaded and displayed. In case loading the remote avatar
fails, the locally generated one will be used.
On iOS this is automagically done in the view destructor, but we don't have
that luxury in Java, so we have to do it manually.
The new dispose() method MUST be called when the Activity holding the view is
going to be destroyed, typically in the onDestroy() handler.
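For example (the Activity and field names here are illustrative):

    public class MeetActivity extends android.app.Activity {
        private JitsiMeetView view;

        // ...

        @Override
        protected void onDestroy() {
            super.onDestroy();

            // Unlike on iOS, nothing releases the view's resources for us, so
            // do it explicitly before the Activity goes away.
            view.dispose();
            view = null;
        }
    }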
Introduces loadURLObject in JitsiMeetView on Android and iOS which
accepts a Bundle and NSDictionary, respectively, similar in structure to
the JS object accepted by the constructor of Web's ExternalAPI. At this
time, only the property url of the bundle/dictionary is supported.
However, it allows the public API of loadURLObject to be consumed. The
property url will be made optional in the future and other properties
will be supported from which a URL will be constructed.
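On Android that currently looks like this (the room URL is just an example):

    android.os.Bundle urlObject = new android.os.Bundle();

    // Only the url property is honored for now.
    urlObject.putString("url", "https://meet.jit.si/ExampleRoom");

    view.loadURLObject(urlObject);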
Initializing a new URL/NSURL instance is a chore, especially when one
takes into account that the JavaScript side (1) loads the URL
asynchronously and (2) is capable of parsing strings that may or may not
be representable as URL/NSURL.
The Android method loadURLString(String) could have been named
loadURL(String), overloading loadURL(URL), but I didn't want to do that
because:
1. It would not be compatible with existing source code such as
loadURL(null) which would have become ambiguous.
2. I wanted to achieve better convergence with the iOS API.
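Concretely, the resulting Android surface can be exercised as follows (the
room URL is again just an example):

    void load(JitsiMeetView view) throws java.net.MalformedURLException {
        // Overload taking an already constructed java.net.URL.
        view.loadURL(new java.net.URL("https://meet.jit.si/ExampleRoom"));

        // String variant: the JavaScript side parses the URL asynchronously,
        // sparing the caller the URL construction chore.
        view.loadURLString("https://meet.jit.si/ExampleRoom");

        // Still compiles unambiguously: load the default URL.
        view.loadURL(null);
    }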
This was already an implicit requirement, as it's the only version implemented
in libwebrtc.
The reason to add this is to (defensively) filter out old devices which may
not implement it.
WebRTC's own Android demo app defines this.
This reverts commit c9a29153dd.
Now that react-native-webrtc supports the runtime permissions system
introduced in API level 23, use it, since it provides a more pleasant
experience for users.
In addition, fix a bug in the previous code: the React Native view must be
loaded after we have acquired the permission to draw on top of other apps,
otherwise our app may crash while we accept the permission, since React may try
to draw.
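The ordering described above boils down to something like the following inside
the hosting Activity (the method name loadReactNativeView and the request code
are placeholders, not the app's actual code):

    import android.content.Intent;
    import android.net.Uri;
    import android.os.Build;
    import android.provider.Settings;

    private static final int OVERLAY_PERMISSION_REQUEST_CODE = 4242;

    private void maybeLoadReactNativeView() {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M
                && !Settings.canDrawOverlays(this)) {
            // Do not load React Native yet: ask for the "draw over other
            // apps" permission first, otherwise React may try to draw and
            // crash while the permission dialog is up.
            Intent intent = new Intent(
                    Settings.ACTION_MANAGE_OVERLAY_PERMISSION,
                    Uri.parse("package:" + getPackageName()));
            startActivityForResult(intent, OVERLAY_PERMISSION_REQUEST_CODE);
            return;
        }

        loadReactNativeView(); // placeholder for the actual view loading
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);

        if (requestCode == OVERLAY_PERMISSION_REQUEST_CODE
                && Build.VERSION.SDK_INT >= Build.VERSION_CODES.M
                && Settings.canDrawOverlays(this)) {
            // Only now is it safe to let React draw on top of other apps.
            loadReactNativeView();
        }
    }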
React Native's Gradle script does not bundle the JS bundle in the Debug
configuration. Copy that source code (and adapt it) into our sdk Gradle
script.
API level 22 is below 23 (aka Marshmallow), which introduced an overhaul of
the permissions system. React Native recommends 22 (it's the default when you
create a new app) and there have been reports of problems when it is set
higher ([0], [1]).
This also fixes a critical bug, wherein Jitsi Meet wouldn't request permissions
for the camera and microphone.
Last, this change also allows us to get rid of the overlay checking code,
because it was only needed for API level 23 or higher.
[0]: https://github.com/facebook/react-native/pull/10479
[1]: https://github.com/facebook/react-native/issues/10587