Looks like the "Firebase Analytics" dependency is needed when migrating to the
new Firebase Crashlytics SDK. We are only interested in the "latest version
crash-free users" stat, which seems to require it. The documentation is
somewhat confusing, though.
JSC wasn't the cause for the crash we were hunting after all. RN doesn't set
Hermes as the default, and neither does Expo, so the jury is still out on
Hermes, and it looks like JSC is still the safest bet.
In addition, the way Hermes is packaged (as standalone AARs instead of a
local "Maven repo") complicates the SDK build and can bloat the resulting
build.
The RN Permissions module calls this on a non-UI thread. What we observe is a
crash in ViewGroup.dispatchCancelPendingInputEvents, which is called on the
calling (i.e., non-UI) thread. This doesn't look very safe, so avoid the
crash by pretending the permission was denied.
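For illustration, a minimal sketch of the workaround (names are made up, this
is not the actual module code): bail out early when called off the UI thread
and report a denial instead of risking the crash.

    import android.content.pm.PackageManager;
    import android.os.Looper;

    final class PermissionsHelper {
        interface PermissionResultCallback {
            void onResult(String permission, int result);
        }

        static void requestPermission(String permission, PermissionResultCallback callback) {
            if (Looper.myLooper() != Looper.getMainLooper()) {
                // Off the UI thread: pretend the permission was denied rather
                // than risk the ViewGroup.dispatchCancelPendingInputEvents crash.
                callback.onResult(permission, PackageManager.PERMISSION_DENIED);
                return;
            }
            // ... normal permission request flow on the UI thread ...
        }
    }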
When exiting PiP by pressing the X button, the onPictureInPictureModeChanged
method is called. Since onResume is only called a while later (also when the
maximize button is pressed), it's not easy to know whether pressing the X
button was what caused PiP to exit.
So, to make it clear to the user that they are still in the meeting, bring the
activity to the foreground so they can hang up.
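Roughly (illustrative Activity, not the actual JitsiMeetActivity code), the
idea looks like this:

    import android.app.Activity;
    import android.content.Intent;

    public class MeetingActivity extends Activity {
        @Override
        public void onPictureInPictureModeChanged(boolean isInPictureInPictureMode) {
            super.onPictureInPictureModeChanged(isInPictureInPictureMode);

            if (!isInPictureInPictureMode) {
                // We can't tell yet whether this was the X or the maximize
                // button, so bring the activity to the front either way; it's
                // effectively a no-op when the user maximized.
                Intent intent = new Intent(this, MeetingActivity.class);
                intent.addFlags(Intent.FLAG_ACTIVITY_REORDER_TO_FRONT);
                startActivity(intent);
            }
        }
    }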
Android for Enterprise provides a mechanism for applications to obtain configuration remotely (from an MDM solution) through the RestrictionsManager.
Jitsi Meet can be remotely installed and provisioned with a proper URL (making the URL not editable by the user) inside a Work Profile or on a fully managed device.
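For reference, reading such remotely provisioned values goes through
RestrictionsManager; the "SERVER_URL" key below is only an example, not
necessarily the key used by Jitsi Meet:

    import android.content.Context;
    import android.content.RestrictionsManager;
    import android.os.Bundle;

    final class RestrictionsReader {
        static String getServerUrl(Context context) {
            RestrictionsManager manager =
                (RestrictionsManager) context.getSystemService(Context.RESTRICTIONS_SERVICE);
            Bundle restrictions = manager.getApplicationRestrictions();

            return restrictions != null ? restrictions.getString("SERVER_URL") : null;
        }
    }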
In the Android SDK, the setServerURL option is erroneously
ignored. The meeting's serverURL always defaults to
https://meet.jit.si because the serverURL is not parceled.
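The gist of the fix is to include the server URL when writing and reading the
Parcel; a simplified illustration (class and field names are not the actual
SDK ones):

    import android.net.Uri;
    import android.os.Parcel;
    import android.os.Parcelable;

    public class ConferenceOptions implements Parcelable {
        private Uri serverURL;

        public ConferenceOptions() {
        }

        private ConferenceOptions(Parcel in) {
            String url = in.readString();
            serverURL = url != null ? Uri.parse(url) : null;
        }

        @Override
        public void writeToParcel(Parcel dest, int flags) {
            // Without this the URL is lost when the options cross the
            // Parcel boundary and the default server ends up being used.
            dest.writeString(serverURL != null ? serverURL.toString() : null);
        }

        @Override
        public int describeContents() {
            return 0;
        }

        public static final Parcelable.Creator<ConferenceOptions> CREATOR =
            new Parcelable.Creator<ConferenceOptions>() {
                @Override
                public ConferenceOptions createFromParcel(Parcel in) {
                    return new ConferenceOptions(in);
                }

                @Override
                public ConferenceOptions[] newArray(int size) {
                    return new ConferenceOptions[size];
                }
            };
    }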
It's the source of countless problems for which we don't have a good
solution, since they are caused by buggy implementations of self-managed
connection services by manufacturers.
In 49e3b03885 we turned on SW encoders / decoders
on account of some devices having broken HW *encoders* and also our desire to
use simulcast.
Well, the astute reader may have noticed that only *encoding* was mentioned.
Indeed, we should be able to keep using the HW decoder just fine.
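In terms of WebRTC's Android APIs the idea is roughly the following (the exact
factories used by jitsi-meet may differ; PeerConnectionFactory.initialize() is
assumed to have been called elsewhere):

    import org.webrtc.DefaultVideoDecoderFactory;
    import org.webrtc.EglBase;
    import org.webrtc.PeerConnectionFactory;
    import org.webrtc.SoftwareVideoEncoderFactory;

    final class PeerConnectionFactoryProvider {
        static PeerConnectionFactory create(EglBase.Context eglContext) {
            return PeerConnectionFactory.builder()
                // Software encoding: works around broken HW encoders and
                // plays nice with simulcast.
                .setVideoEncoderFactory(new SoftwareVideoEncoderFactory())
                // Decoding can keep using the HW-capable default factory.
                .setVideoDecoderFactory(new DefaultVideoDecoderFactory(eglContext))
                .createPeerConnectionFactory();
        }
    }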
This shouldn't be needed, as ConnectionService should take care of it, but we
suspect some devices don't do it since we got reports of people not hearing
users, and the problem went away when CS was disabled.
Fall back to the non-ConnectionService case for any error. Also, handle errors
when registering the phone account; Pixel C devices throw
UnsupportedOperationException.
Some Samsung devices will fail to fully engage ConnectionService if no SIM card
was ever installed on the device. We could check for it, but it would require
the CALL_PHONE permission, which is not something we want to do, so fall back
to not using ConnectionService.
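Schematically (simplified, not the exact code), the registration fallback
amounts to:

    import android.telecom.PhoneAccount;
    import android.telecom.TelecomManager;

    final class PhoneAccountRegistrar {
        /** Returns true if the account was registered, false if we must fall
         *  back to the non-ConnectionService code path. */
        static boolean register(TelecomManager telecomManager, PhoneAccount account) {
            try {
                telecomManager.registerPhoneAccount(account);
                return true;
            } catch (Exception e) {
                // Some devices (Pixel C, Samsung devices that never had a SIM
                // card, ...) fail here; just disable ConnectionService.
                return false;
            }
        }
    }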
Some devices seem to have a bug in their Android versions and startCall fails
with a SecurityException because the CALL_PHONE permission is not granted. This
is not a requirement for self-managed connection services as per the official
documentation though:
https://developer.android.com/guide/topics/connectivity/telecom/selfManaged
Alas, ConnectionService takes over audio device management too, so let's
handle the error and disable CS if we get a SecurityException.
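Likewise for starting the call itself; a rough sketch of handling the
SecurityException and disabling CS (illustrative, not the exact code):

    import android.net.Uri;
    import android.os.Bundle;
    import android.telecom.TelecomManager;

    final class CallStarter {
        /** Returns true if the self-managed call was placed, false if
         *  ConnectionService should be disabled for this call. */
        static boolean startSelfManagedCall(TelecomManager telecomManager,
                                            Uri address,
                                            Bundle extras) {
            try {
                telecomManager.placeCall(address, extras);
                return true;
            } catch (SecurityException e) {
                // Buggy devices demand CALL_PHONE even for self-managed
                // connection services; fall back to handling audio ourselves.
                return false;
            }
        }
    }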
Samsung devices (of course) seem to stick with the earpiece if we first select
Bluetooth but then set speaker to false. Reverse the order to make everyone
happy.
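That is, roughly (assumed helper, not the actual AudioModeModule code), turn
the speaker off before engaging Bluetooth SCO:

    import android.media.AudioManager;

    final class AudioRouteHelper {
        static void selectBluetooth(AudioManager audioManager) {
            // Speaker off first, then Bluetooth; the reverse order leaves
            // some Samsung devices stuck on the earpiece.
            audioManager.setSpeakerphoneOn(false);
            audioManager.startBluetoothSco();
            audioManager.setBluetoothScoOn(true);
        }
    }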
This only applies to the generic and legacy handlers.
When ConnectionService is used (the default) we were attaching the handlers too
early, and since attaching them requires that the RNConnectionService module is
loaded, it silently failed. Instead, use the initialize() method, which gets
called after all the Catalyst (aka native) modules have been loaded.
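The pattern, sketched with an example module (names are illustrative;
initialize() itself is the real RN hook):

    import com.facebook.react.bridge.ReactApplicationContext;
    import com.facebook.react.bridge.ReactContextBaseJavaModule;

    class AudioModeModule extends ReactContextBaseJavaModule {
        AudioModeModule(ReactApplicationContext reactContext) {
            super(reactContext);
        }

        @Override
        public String getName() {
            return "AudioMode";
        }

        @Override
        public void initialize() {
            // Called after all the Catalyst (aka native) modules have been
            // loaded, so RNConnectionService is available here; attach the
            // ConnectionService-based handlers at this point.
        }
    }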
Separate each implementation (3 as of this writing) into its own "handler"
class.
This should make the code easier to understand, maintain and extend.
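Something along these lines (interface name and methods are illustrative):

    // Each audio device management strategy implements this and lives in its
    // own class, e.g. a ConnectionService-based handler, a generic one and a
    // legacy one.
    interface AudioDeviceHandler {
        void start();
        void stop();
        void setAudioRoute(String device);
    }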
We are downloading code off the Internet and executing it on the user's device,
so run it sandboxed to avoid potential bad actors.
Since it's impossible to eval() safely in JS and React Native doesn't offer
something akin to Node's vm module, here we are rolling our own.
On Android it uses the Duktape JavaScript engine and on iOS the builtin
JavaScriptCore engine. The extra JS engine is *only* used for evaluating the
downloaded code and returning a JSON string which is then passed back to RN.
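On the Android side this boils down to something like the following
(simplified; Duktape here is Square's duktape-android binding):

    import com.squareup.duktape.Duktape;

    final class SandboxedEval {
        static String evaluate(String downloadedCode) {
            // Separate, throwaway JS engine with no bridge to our RN context:
            // it evaluates the code and only a string comes back out.
            Duktape duktape = Duktape.create();
            try {
                Object result = duktape.evaluate(downloadedCode);
                return result != null ? result.toString() : null;
            } finally {
                duktape.close();
            }
        }
    }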
If the Activity is put into the background before the ReactContext is created we
get an NPE here. While the window might be short, it's technically possible to
hit this, as our Crashlytics reports show.
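The guard is trivial; something like this (illustrative helper, not the exact
code):

    import com.facebook.react.ReactInstanceManager;
    import com.facebook.react.bridge.ReactContext;

    final class ReactContextUtils {
        static boolean isReactContextReady(ReactInstanceManager reactInstanceManager) {
            // getCurrentReactContext() returns null until the context has
            // been created, which may not have happened yet if the Activity
            // was backgrounded right after launch.
            ReactContext reactContext = reactInstanceManager.getCurrentReactContext();
            return reactContext != null;
        }
    }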
In
b53a034aaf (diff-0339cf92cc68bc5981fe6df601316c1c)
I removed this because RN had updated the builtin JSC version. On the next
release, however, RN introduced a new JS interpreter (Hermes), so JSC is now an
RN dependency. Thus, add the magic spells to publish the AARs to Maven.
These provide the ability to integrate the SDK with other application
loggers.
At the time this was written we use Timber on Android and CocoaLumberjack on iOS.
In addition to the integration capabilities, a LogBridge React Native module
provides log transports for JavaScript code, thus centralizing all logs on the
native loggers.
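On Android the integration point is Timber; a hosting app would typically just
plant its own tree (placeholder example, not SDK code):

    import timber.log.Timber;

    final class LoggingSetup {
        static void init() {
            // Any Timber.Tree works here; SDK logs routed through Timber end
            // up wherever the app's trees send them.
            Timber.plant(new Timber.DebugTree());
        }
    }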
Will emit a new 'network.info' action with the online/offline status and, on
native, extra details such as the network type and the
'isConnectionExpensive' flag.