Looks like audio devices must be re-set after focus was lost and regained.
Otherwise some devices (tested on a Samsung Galaxy S9) are in a weird state
where the second microphone is not used when speakerphone is on.
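Roughly the idea, sketched with plain AudioManager calls (the focus listener wiring and the speakerphone flag are assumptions, not the exact code):

~~~
// Sketch only: when audio focus comes back, re-apply the current route instead
// of trusting whatever state the device is in.
AudioManager.OnAudioFocusChangeListener listener = focusChange -> {
    if (focusChange == AudioManager.AUDIOFOCUS_GAIN) {
        // Some devices (e.g. the Galaxy S9) end up in a broken speakerphone state,
        // so force the routing again. isSpeakerphoneSelected is a hypothetical flag.
        audioManager.setMode(AudioManager.MODE_IN_COMMUNICATION);
        audioManager.setSpeakerphoneOn(isSpeakerphoneSelected);
    }
};
~~~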
Looks like the "Firebase Analytics" dependency is needed when migrating to the
new Firebase Crashlytics SDK. We are only interested in the "latest version
crash-free users" stat, which seems to require this. The documentation is
somewhat confusing though.
JSC wasn't the cause for the crash we were hunting after all. RN doesn't set
Hermes as the default, and neither does Expo, so the jury is still out on Hermes,
and it looks like JSC is still the safest bet.
In addition, the way Hermes is packaged (as standalone AARs, instead of a
local "Maven repo") complicates the SDK build and can make the resulting build
bloated.
The RN Permissions module calls this on a non-UI thread. What we observe is a
crash in ViewGroup.dispatchCancelPendingInputEvents, which is called on the
calling (i.e., non-UI) thread. This doesn't look very safe, so try to avoid a
crash by pretending the permission was denied.
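A hedged sketch of the workaround; the wrapper method and the PermissionListener usage are stand-ins for whatever the module actually does:

~~~
// Sketch: if we're not on the UI thread, don't touch the view hierarchy at all;
// report the permission as denied instead of risking the crash.
private void requestPermission(String permission, PermissionListener listener) {
    if (Looper.myLooper() != Looper.getMainLooper()) {
        listener.onRequestPermissionsResult(
            0,
            new String[] { permission },
            new int[] { PackageManager.PERMISSION_DENIED });
        return;
    }
    // ... normal request path, running on the UI thread ...
}
~~~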
When exiting PiP by pressing the X button, the onPictureInPictureModeChanged
method is called. Since onResume is only called a while later (in the case where
the maximize button was pressed), it's not easy to know whether the user pressed
the X button and that was the cause for exiting PiP.
So, to avoid a situation where the user doesn't realize they are still in the
meeting, bring the activity to the foreground so they can hang up.
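Roughly the idea; the Intent + REORDER_TO_FRONT approach here is one way to do it, not necessarily the exact code:

~~~
@Override
public void onPictureInPictureModeChanged(boolean isInPictureInPictureMode, Configuration newConfig) {
    super.onPictureInPictureModeChanged(isInPictureInPictureMode, newConfig);
    if (!isInPictureInPictureMode) {
        // We can't tell (yet) whether the user tapped X or maximize, so make sure
        // the activity ends up in the foreground either way; they can hang up there.
        Intent intent = new Intent(this, getClass());
        intent.addFlags(Intent.FLAG_ACTIVITY_REORDER_TO_FRONT);
        startActivity(intent);
    }
}
~~~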
Android for Enterprise provides a mechanism for applications to obtain their configuration remotely, through RestrictionsManager, from an MDM solution.
Jitsi Meet can be remotely installed and provisioned with a preset URL (making the URL not editable by the user) inside a Work Profile or on a fully managed device.
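Reading the managed configuration looks roughly like this; the "SERVER_URL" restriction key is hypothetical:

~~~
// Sketch: pull app restrictions pushed by the MDM and pick up the provisioned URL.
RestrictionsManager restrictionsManager =
    (RestrictionsManager) context.getSystemService(Context.RESTRICTIONS_SERVICE);
Bundle restrictions = restrictionsManager.getApplicationRestrictions();
String serverUrl = restrictions.getString("SERVER_URL");  // key name assumed
if (serverUrl != null) {
    // Use the provisioned server and make the URL field read-only.
}
~~~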
In the Android SDK, the setServerURL option is erroneously
ignored. The meeting's serverURL always defaults to
https://meet.jit.si because the serverURL is not parceled.
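The kind of fix this needs, sketched with assumed class/field names (not the actual code): make serverURL round-trip through the Parcel like the rest of the options.

~~~
// Hypothetical sketch, e.g. inside the conference options' Parcelable implementation.
@Override
public void writeToParcel(Parcel dest, int flags) {
    dest.writeString(serverURL != null ? serverURL.toString() : null);
    // ... the rest of the options ...
}

// And symmetrically on the reading side:
// serverURL = maybeParseURL(in.readString());  // maybeParseURL is a hypothetical helper
~~~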
It's the source of uncountable problems for which we don't have a good
solution, since they are caused by buggy implementations of self-managed
connection services by manufacturers.
In 49e3b03885 we turned on SW encoders / decoders
on account of some devices having broken HW *encoders* and also our desire for
using simulcast.
Well, the astute reader may have noticed that only *encoding* was mentioned.
Indeed, we should be able to keep using the HW decoder just fine.
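With libwebrtc's Android factories the split looks something like this (eglContext is an assumed EglBase.Context from the existing setup):

~~~
// Keep encoding in software (simulcast, broken HW encoders on some devices)
// while letting the default, hardware-capable factory handle decoding.
VideoEncoderFactory encoderFactory = new SoftwareVideoEncoderFactory();
VideoDecoderFactory decoderFactory = new DefaultVideoDecoderFactory(eglContext);

PeerConnectionFactory factory = PeerConnectionFactory.builder()
    .setVideoEncoderFactory(encoderFactory)
    .setVideoDecoderFactory(decoderFactory)
    .createPeerConnectionFactory();
~~~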
This shouldn't be needed, as ConnectionService should take care of it, but we
suspect some devices don't do it since we got reports of people not hearing
users, and the problem went away when CS was disabled.
Fall back to the non-ConnectionService case for any error. Also, handle errors
when registering the phone account; Pixel C devices throw UnsupportedOperationException.
Some Samsung devices will fail to fully engage ConnectionService if no SIM card
was ever installed on the device. We could check for it, but it would require
the CALL_PHONE permission, which is not something we want to do, so fall back to
not using ConnectionService.
Some devices seem to have a bug in their Android version and startCall fails
with a SecurityException because the CALL_PHONE permission is not granted. This
is not a requirement for self-managed connection services as per the official
documentation, though:
https://developer.android.com/guide/topics/connectivity/telecom/selfManaged
Alas, ConnectionService takes over audio device management too, so let's
handle the error and disable CS if we get a SecurityException.
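Roughly what the handling looks like; the fallback hook is hypothetical:

~~~
// Sketch: some devices throw SecurityException from placeCall even though
// CALL_PHONE shouldn't be required for self-managed connection services.
try {
    telecomManager.placeCall(address, extras);
} catch (SecurityException e) {
    // Disable CS and fall back to the regular audio handling (hypothetical hook).
    fallbackToAudioModeModule();
}
~~~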
Samsung devices (of course) seem to stick with the earpiece if we first select
Bluetooth but then set speaker to false. Reverse the order to make everyone
happy.
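A sketch of the reversed order with plain AudioManager calls (the actual routing code may differ):

~~~
private void setBluetoothRoute(AudioManager audioManager) {
    // Turn the speaker off *before* routing to Bluetooth; doing it the other way
    // around leaves some Samsung devices stuck on the earpiece.
    audioManager.setSpeakerphoneOn(false);
    audioManager.startBluetoothSco();
    audioManager.setBluetoothScoOn(true);
}
~~~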
This only applies to the generic and legacy handlers.
When ConnectionService is used (the default) we were attaching the handlers too
early, and since attaching them requires that the RNConnectionService module is
loaded, it silently failed. Instead, use the initialize() method, which gets
called after all the Catalyst (aka native) modules have been loaded.
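A minimal sketch of the change, inside the audio mode module (the handler setup call is a stand-in for the real code):

~~~
@Override
public void initialize() {
    super.initialize();
    // All Catalyst modules (including RNConnectionService) are loaded at this
    // point, so it's now safe to pick and attach the audio device handler.
    setAudioDeviceHandler();  // hypothetical: selects CS / generic / legacy handler
}
~~~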
Separate each implementation (3 as of this writing) into its own "handler"
class.
This should make the code easier to understand, maintain and extend.
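The rough shape of the split; interface and method names here are assumptions, not the actual API:

~~~
// One implementation per strategy: ConnectionService, generic (AudioManager
// based) and legacy (pre-M) handling.
interface AudioDeviceHandler {
    void start(AudioModeModule module);
    void stop();
    boolean setAudioRoute(String device);
}
~~~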
We are downloading code off the Internet and executing it on the user's device,
so run it sandboxed to avoid potential bad actors.
Since it's impossible to eval() safely in JS and React Native doesn't offer
something akin to Node's vm module, here we are rolling our own.
On Android it uses the Duktape JavaScript engine and on iOS the builtin
JavaScriptCore engine. The extra JS engine is *only* used for evaluating the
downloaded code and returning a JSON string which is then passed back to RN.
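On the Android side this looks roughly like the following, assuming Square's Duktape-Android bindings and a React Native Promise for handing the result back:

~~~
// Evaluate the downloaded code in an isolated engine; only a JSON string
// ever crosses back into React Native.
Duktape duktape = Duktape.create();
try {
    String json = (String) duktape.evaluate(downloadedCode);
    promise.resolve(json);
} catch (DuktapeException e) {
    promise.reject(e);
} finally {
    duktape.close();
}
~~~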
If the Activity is put into the background before the ReactContext is created we
get an NPE here. While the window might be short, it's technically possible to
hit this, as our Crashlytics reports show.
In
b53a034aaf (diff-0339cf92cc68bc5981fe6df601316c1c)
I removed this, because RN has updated the builtin JSC version. On the next
release, however, RN introduced a new JS interpreter (Hermes) so JSC is now a RN
dependency. Thus, add the magic spells to publish the AARs to Maven.
These provide the ability to integrate the SDK with some other application
loggers.
At the time this was written we use Timber on Android and CocoaLumberjack on iOS.
In addition to the integration capabilities, a LogBridge React Native module
provides log transports for JavaScript code, thus centralizing all logs on the
native loggers.
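On Android, one way for the host app to hook in is to plant its own Timber Tree (myAppLogger is a hypothetical app-side logger):

~~~
Timber.plant(new Timber.Tree() {
    @Override
    protected void log(int priority, String tag, String message, Throwable t) {
        // Forward the SDK's log output to whatever pipeline the app already uses.
        myAppLogger.log(priority, tag, message, t);
    }
});
~~~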
Will emit a new 'network.info' action with the online/offline status and, on
native, extra details such as the network type and the 'isConnectionExpensive'
flag.
This commit refactors device selection (more heavily on iOS) to make it
consistent across platforms.
Due to its complexity I couldn't break out each step into separate commits,
apologies to the reviewer.
Changes made to device handling:
- speaker is always the default, regardless of the mode
- "Phone" shows as a selectable option, even in video call mode
- "Phone" is not displayed when wired headphones are present
- Shared device picker between iOS and Android
- Runtime device updates while the picker is open
Set our own audio device manager so we can tweak it if need be (enabling /
disabling the HW AEC on specific devices).
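For example, something like this with libwebrtc's JavaAudioDeviceModule (the blacklist check is hypothetical):

~~~
// This is where per-device tweaks such as disabling the HW AEC / NS would go.
AudioDeviceModule adm = JavaAudioDeviceModule.builder(context)
    .setUseHardwareAcousticEchoCanceler(!isHwAecBlacklisted())
    .setUseHardwareNoiseSuppressor(true)
    .createAudioDeviceModule();

PeerConnectionFactory factory = PeerConnectionFactory.builder()
    .setAudioDeviceModule(adm)
    .createPeerConnectionFactory();
~~~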
Switch to using the software video encoder / decoder. This may feel like a
downgrade, but it has advantages:
- simulcast is now working (on par with iOS)
- certain devices have broken VP8 HW encoders (I'm looking at you Samsung Galaxy
S7) so this fixes that
After calling startService we are supposed to have a bit of time before turning
the service into a foreground service, but certain devices seem to be more
spartan and we've seen the following failure:
Caused by java.lang.IllegalStateException: Not allowed to start service Intent { act=JitsiMeetOngoingConferenceService:START cmp=org.jitsi.meet/.sdk.JitsiMeetOngoingConferenceService }: app is in background uid UidRecord{f6778d5 u0a220 CAC bg:+1m1s417ms idle change:idle procs:1 proclist:15604, seq(0,0,0)}
at android.app.ContextImpl.startServiceCommon + 1600(ContextImpl.java:1600)
at android.app.ContextImpl.startService + 1546(ContextImpl.java:1546)
at android.content.ContextWrapper.startService + 669(ContextWrapper.java:669)
at org.jitsi.meet.sdk.JitsiMeetOngoingConferenceService.launch + 50(JitsiMeetOngoingConferenceService.java:50)
Be explicit and call startForegroundService on supported platforms.
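A minimal sketch of the launch path (the intent details are simplified):

~~~
Intent intent = new Intent(context, JitsiMeetOngoingConferenceService.class);
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
    // On O+ a plain startService can be rejected when the app is considered
    // backgrounded, so be explicit about the foreground transition.
    context.startForegroundService(intent);
} else {
    context.startService(intent);
}
~~~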
The app is about to crash at that stage so it was a moot point to try to leave
the conference anyway.
Stopping ConnectionService is still a good idea though, since a crash may leave
the device in a bad state otherwise.
Fixes this issue:
~~~
java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextNode(HashMap.java:1441)
at java.util.HashMap$KeyIterator.next(HashMap.java:1465)
at org.jitsi.meet.sdk.OngoingConferenceTracker.updateListeners(OngoingConferenceTracker.java:89)
at org.jitsi.meet.sdk.OngoingConferenceTracker.onExternalAPIEvent(OngoingConferenceTracker.java:74)
at org.jitsi.meet.sdk.ExternalAPIModule.sendEvent(ExternalAPIModule.java:71)
at java.lang.reflect.Method.invoke(Native Method)
at com.facebook.react.bridge.JavaMethodWrapper.invoke(JavaMethodWrapper.java:372)
at com.facebook.react.bridge.JavaModuleWrapper.invoke(JavaModuleWrapper.java:158)
at com.facebook.react.bridge.queue.NativeRunnable.run(Native Method)
at android.os.Handler.handleCallback(Handler.java:873)
at android.os.Handler.dispatchMessage(Handler.java:99)
at com.facebook.react.bridge.queue.MessageQueueThreadHandler.dispatchMessage(MessageQueueThreadHandler.java:29)
at android.os.Looper.loop(Looper.java:214)
at com.facebook.react.bridge.queue.MessageQueueThreadImpl$4.run(MessageQueueThreadImpl.java:232)
at java.lang.Thread.run(Thread.java:764)
~~~
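A sketch of the kind of fix: don't iterate the live listener collection directly, since a callback may (un)register a listener mid-iteration. The listener type and method names here are assumed.

~~~
private final Set<OngoingConferenceListener> listeners = new CopyOnWriteArraySet<>();

private void updateListeners(String conferenceUrl) {
    // CopyOnWriteArraySet iterators are snapshot-based, so concurrent
    // additions/removals no longer throw ConcurrentModificationException.
    for (OngoingConferenceListener listener : listeners) {
        listener.onCurrentConferenceChanged(conferenceUrl);
    }
}
~~~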
This helper method gets the current Activity attached to React Native (via the
ReactContext). This is useful for modules which need access to it, without being
actual React Native modules.
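In essence (assuming access to the ReactInstanceManager the SDK already holds):

~~~
static Activity getCurrentActivity(ReactInstanceManager reactInstanceManager) {
    ReactContext reactContext =
        reactInstanceManager != null ? reactInstanceManager.getCurrentReactContext() : null;
    return reactContext != null ? reactContext.getCurrentActivity() : null;
}
~~~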
Its main task is to clean up conferences (especially the connection services
stuff) to make sure the system is left in a working state even when the
unexpected happens.
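A hedged sketch of the mechanism; the cleanup call is hypothetical and the previous handler is chained so normal crash reporting still runs:

~~~
final Thread.UncaughtExceptionHandler previous = Thread.getDefaultUncaughtExceptionHandler();
Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
    // Tear down any active self-managed connections before the process dies,
    // so a crash doesn't leave the device's telecom state broken.
    ConnectionService.abortConnections();  // hypothetical cleanup call
    if (previous != null) {
        previous.uncaughtException(thread, throwable);
    }
});
~~~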
Entering PiP mode while the permissions dialog is displayed will not only
fail, but also mess up the Activity lifecycle on some OS versions.
We may end up with two activity/fragment instances and a situation where
the onStop callback has not yet been called on instance #1 while
onResume has already been called on instance #2.