We are downloading code off the Internet and executing it on the user's device,
so we run it sandboxed to limit the damage potential bad actors can do.
Since it's impossible to eval() safely in JS and React Native doesn't offer
something akin to Node's vm module, here we are rolling our own.
On Android it uses the Duktape JavaScript engine and on iOS the builtin
JavaScriptCore engine. The extra JS engine is *only* used for evaluating the
downloaded code and returning a JSON string which is then passed back to RN.
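A minimal sketch of the iOS half, using the builtin JavaScriptCore engine; the
function name is illustrative, and nothing from the app is injected into the
context, so the downloaded code runs isolated:

    import JavaScriptCore

    // Evaluate the downloaded code in a throwaway context and hand the
    // resulting JSON string back to the RN side.
    func evaluateDownloadedCode(_ code: String) -> String? {
        guard let context = JSContext() else { return nil }
        let result = context.evaluateScript(code)
        // By contract the script evaluates to a JSON string.
        return result?.toString()
    }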
These provide the ability to integrate the SDK with other application loggers.
At the time this was written we use Timber on Android and CocoaLumberjack on iOS.
In addition to the integration capabilities, a LogBridge React Native module
provides log transports for JavaScript code, thus centralizing all logs on the
native loggers.
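A hypothetical sketch of the iOS transport, forwarding JS log levels to
CocoaLumberjack; the module name, method signature and level strings are
assumptions, and the RN export plumbing is omitted:

    import CocoaLumberjackSwift

    @objc(LogBridge)
    class LogBridge: NSObject {
        // Called from JS with a level and a preformatted message.
        @objc(log:message:)
        func log(_ level: String, message: String) {
            switch level {
            case "error": DDLogError("\(message)")
            case "warn":  DDLogWarn("\(message)")
            case "debug": DDLogDebug("\(message)")
            default:      DDLogInfo("\(message)")
            }
        }
    }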
This commit refactors device selection (more heavily on iOS) to make it
consistent across platforms.
Due to its complexity I couldn't break each step out into separate commits;
apologies to the reviewer.
Changes made to device handling (a sketch follows the list):
- speaker is always the default, regardless of the mode
- "Phone" shows as a selectable option, even in video call mode
- "Phone" is not displayed when wired headphones are present
- Shared device picker between iOS and Android
- Runtime device updates while the picker is open
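A minimal sketch of the headphone rule, assuming AVAudioSession route
inspection; the device names are illustrative:

    import AVFoundation

    func selectableDevices() -> [String] {
        let outputs = AVAudioSession.sharedInstance().currentRoute.outputs
        let hasWiredHeadphones = outputs.contains { $0.portType == .headphones }

        var devices = ["Speaker"]  // always present, and always the default
        if hasWiredHeadphones {
            devices.append("Headphones")
        } else {
            devices.append("Phone")  // the earpiece, shown even in video calls
        }
        return devices
    }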
It's possible for a CallKit event to arrive after the React bridge has been
torn down, and there is an assert that checks for this. In order to avoid a
crash, just skip the event.
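A sketch of the early-out, shown for the end-call action; `bridge` stands in
for the module's reference to the React bridge, and the names are illustrative:

    import CallKit

    class CallKitEventHandler: NSObject, CXProviderDelegate {
        weak var bridge: AnyObject?  // nil once React has been torn down

        func providerDidReset(_ provider: CXProvider) {}

        func provider(_ provider: CXProvider, perform action: CXEndCallAction) {
            guard bridge != nil else {
                // The bridge is gone; nothing can receive the event, so
                // fulfill and bail out instead of tripping the assert.
                action.fulfill()
                return
            }
            // ... forward the event to JS as before ...
            action.fulfill()
        }
    }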
Replace the Swift array with an Objective-C one, since it's going to store
Objective-C objects rather than Swift objects (Swift objects which inherit
from NSObject amount to the same thing).
This avoids the need for JMCallKitEventListenerWrapper entirely, since an
NSArray can store NSObjectProtocol objects, unlike a Swift array, which prompted
the creation of the wrapper in the first place.
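A minimal sketch of the post-refactor storage, with illustrative names:

    import Foundation

    final class ListenerRegistry {
        // NSMutableArray holds NSObjectProtocol conformers directly,
        // so no wrapper type is needed.
        private let listeners = NSMutableArray()

        func add(_ listener: NSObjectProtocol) {
            if !listeners.contains(listener) {
                listeners.add(listener)
            }
        }

        func remove(_ listener: NSObjectProtocol) {
            listeners.remove(listener)
        }
    }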
The SDK will now search for an asset called "CallKitIcon" in the main bundle,
and fall back to a built-in asset if it's not there, allowing SDK users to
customize it by just adding an asset with that name.
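A sketch of the lookup order; `SDKBundleMarker` is a stand-in class used only
to locate the SDK bundle:

    import UIKit
    import CallKit

    private final class SDKBundleMarker {}

    // The host app's asset wins; the SDK's bundled one is the fallback.
    let icon = UIImage(named: "CallKitIcon")
        ?? UIImage(named: "CallKitIcon",
                   in: Bundle(for: SDKBundleMarker.self),
                   compatibleWith: nil)

    let configuration = CXProviderConfiguration(localizedName: "MyApp")
    configuration.iconTemplateImageData = icon?.pngData()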
Ever since we switched to handling track events instead of mute actions this
has been dead code. It was also added in the wrong place, since it's the
responsibility of the JS code to solve the ping-pong problem.
NSURLConnection sendSynchronousRequest is deprecated since iOS 9. Replace the
method with what's currently on RN master, which implements a modern
alternative.
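For reference, a synchronous fetch can be layered on URLSession with a
semaphore; this is a generic sketch, not necessarily what RN master does, and
it assumes the caller is off the main thread (as the deprecated API required
anyway):

    import Foundation

    func sendSynchronousRequest(_ request: URLRequest)
            -> (Data?, URLResponse?, Error?) {
        var result: (Data?, URLResponse?, Error?) = (nil, nil, nil)
        let semaphore = DispatchSemaphore(value: 0)
        URLSession.shared.dataTask(with: request) { data, response, error in
            result = (data, response, error)
            semaphore.signal()
        }.resume()
        semaphore.wait()
        return result
    }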
Consolidate all failure cases into a single one: CONFERENCE_TERMINATED. If the
conference ended gracefully no error indicator will be present; otherwise there
will be one.
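A hypothetical sketch of the resulting payload; the field names are
illustrative and `emit` stands in for the RN event emitter:

    import Foundation

    func conferenceTerminated(url: URL, error: Error?,
                              emit: (String, [String: Any]) -> Void) {
        var payload: [String: Any] = ["url": url.absoluteString]
        if let error = error {
            // Present only when the conference ended abnormally.
            payload["error"] = error.localizedDescription
        }
        emit("CONFERENCE_TERMINATED", payload)
    }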
Since the SDK may be embedded in other apps, we need to recognize our custom
URL scheme and universal links in order to tell the user whether we will
process the request or not.
Make them configurable with sane defaults.
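A minimal sketch of the check; the property names and default values are
placeholders, not the SDK's actual ones:

    import Foundation

    struct LinkRecognizer {
        var customURLSchemes: Set<String> = ["org.example.app"]
        var universalLinkDomains: Set<String> = ["meet.example.com"]

        func canHandle(_ url: URL) -> Bool {
            if let scheme = url.scheme,
               customURLSchemes.contains(scheme.lowercased()) {
                return true
            }
            if let host = url.host,
               universalLinkDomains.contains(host.lowercased()) {
                return true
            }
            return false
        }
    }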
RTCAudioSession is a thin wrapper around AVAudioSession provided by the WebRTC
framework. It makes some use-cases easier, and leads us closer to manual audio
unit management, which we will likely need in the near future.
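A sketch of typical usage, assuming the Objective-C API shape the WebRTC
framework exposed at the time; every change must happen inside the
configuration lock:

    import AVFoundation
    import WebRTC

    let session = RTCAudioSession.sharedInstance()
    session.lockForConfiguration()
    do {
        try session.setCategory(AVAudioSession.Category.playAndRecord.rawValue,
                                with: [.allowBluetooth])
        try session.setMode(AVAudioSession.Mode.voiceChat.rawValue)
    } catch {
        print("Failed to configure RTCAudioSession: \(error)")
    }
    session.unlockForConfiguration()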