We are downloading code off the Internet and executing it on the user's device,
so it must run sandboxed to guard against potential bad actors.
Since it's impossible to eval() safely in JS and React Native doesn't offer
something akin to Node's vm module, we roll our own here.
On Android it uses the Duktape JavaScript engine and on iOS the built-in
JavaScriptCore engine. The extra JS engine is *only* used for evaluating the
downloaded code and returning a JSON string, which is then passed back to RN.
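A minimal sketch of the iOS side, assuming hypothetical names (the
evaluateSandboxed helper and the convention that the downloaded code leaves its
output in a `result` global); the actual SDK code may differ:

~~~swift
import JavaScriptCore

// Evaluate untrusted, downloaded JS in an isolated JSContext and hand only a
// JSON string back to the React Native side.
func evaluateSandboxed(_ downloadedCode: String) -> String? {
    guard let context = JSContext() else { return nil }  // separate engine, no access to RN's runtime
    context.exceptionHandler = { _, exception in
        // Swallow errors thrown by the untrusted code; the caller just gets nil.
        print("Sandboxed eval failed: \(exception?.toString() ?? "unknown")")
    }
    context.evaluateScript(downloadedCode)
    // By convention (assumed here) the script stores its JSON output in `result`.
    let result = context.objectForKeyedSubscript("result")
    return result?.isString == true ? result?.toString() : nil
}
~~~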
These provide the ability to integrate the SDK with other application loggers.
At the time of writing we use Timber on Android and CocoaLumberjack on iOS.
In addition to the integration capabilities, a LogBridge React Native module
provides log transports for JavaScript code, thus centralizing all logs on the
native loggers.
This commit refactors device selection (more heavily on iOS) to make it
consistent across platforms.
Due to its complexity I couldn't break each step out into separate commits;
apologies to the reviewer.
Changes made to device handling:
- speaker is always the default, regardless of the mode
- "Phone" shows as a selectable option, even in video call mode
- "Phone" is not displayed when wired headphones are present
- Shared device picker between iOS and Android
- Runtime device updates while the picker is open
It's possible for a CallKit event to arrive after the React bridge has been torn
down, and there is an assert which checks for this. In order to avoid a crash,
just skip the event.
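An illustrative sketch of the guard, assuming the handler lives in an
RCTEventEmitter subclass that also acts as the CXProviderDelegate (event and
delegate names are placeholders):

~~~swift
import CallKit

// Skip the CallKit event if the React bridge is already gone, instead of
// tripping the assert inside RCTEventEmitter's sendEvent.
func provider(_ provider: CXProvider, perform action: CXSetMutedCallAction) {
    guard bridge != nil else {
        // The view (and its bridge) was already torn down; nothing to notify.
        action.fulfill()
        return
    }
    sendEvent(withName: "performSetMutedCallAction",
              body: ["muted": action.isMuted, "callUUID": action.callUUID.uuidString])
    action.fulfill()
}
~~~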
Replace the Swift array with an Objective-C one, since it's going to store
Objective-C objects and not Swift objects (or Swift objects which inherit from
NSObject, which is equivalent).
This avoids the need for JMCallKitEventListenerWrapper entirely, since an
NSArray can store NSObjectProtocol objects, unlike a Swift array, which prompted
the creation of the wrapper in the first place.
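A rough sketch of the resulting shape (class and method names are hypothetical):

~~~swift
import Foundation

// Listener registry holding NSObjectProtocol conformers directly in an
// Objective-C collection, so no wrapper type is needed.
final class ListenerRegistry {
    private let listeners = NSMutableArray()

    func add(_ listener: NSObjectProtocol) {
        listeners.add(listener)
    }

    func remove(_ listener: NSObjectProtocol) {
        listeners.remove(listener)
    }

    func forEach(_ body: (NSObjectProtocol) -> Void) {
        for case let listener as NSObjectProtocol in listeners {
            body(listener)
        }
    }
}
~~~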
The SDK will now search for an asset called "CallKitIcon" in the main bundle,
and fall back to a built-in asset if it's not there, allowing SDK users to
customize it by simply adding an asset with that name.
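A sketch of the lookup order; locating the SDK bundle through one of its classes
is an assumption made here for illustration:

~~~swift
import UIKit

// Prefer a "CallKitIcon" asset from the host app's main bundle, falling back
// to the one shipped inside the SDK framework.
func callKitIcon() -> UIImage? {
    if let custom = UIImage(named: "CallKitIcon") {
        return custom  // the host app provided its own icon
    }
    // JitsiMeetView is used here only to locate the SDK's own bundle.
    let sdkBundle = Bundle(for: JitsiMeetView.self)
    return UIImage(named: "CallKitIcon", in: sdkBundle, compatibleWith: nil)
}
~~~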
Ever since we switched to handling track events instead of mute actions this has
been dead code. It was also added in the wrong place, since it's the
responsibility of the JS code to solve the ping-pong problem.
NSURLConnection sendSynchronousRequest has been deprecated since iOS 9. Replace
the method with what's currently on RN master, which implements a modern alternative.
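For context, a common shape of such a replacement (not necessarily the exact RN
code) wraps URLSession in a semaphore to keep the synchronous call site:

~~~swift
import Foundation

// Block on a semaphore around URLSession instead of the deprecated
// NSURLConnection.sendSynchronousRequest.
func sendSynchronousRequest(_ request: URLRequest) -> (Data?, URLResponse?, Error?) {
    var result: (Data?, URLResponse?, Error?) = (nil, nil, nil)
    let semaphore = DispatchSemaphore(value: 0)
    URLSession.shared.dataTask(with: request) { data, response, error in
        result = (data, response, error)
        semaphore.signal()
    }.resume()
    semaphore.wait()
    return result
}
~~~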
Consolidate all failure cases into a single one: CONFERENCE_TERMINATED. If the
conference ended gracefully no error indicator will be present, otherwise there
will be.
Since the SDK may be embedded in other apps, we need to recognize our custom
URL scheme and universal links in order to tell the user if we will process the
request or not.
Make them configurable with sane defaults.
RTCAudioSession is a thin wrapper around AVAudioSession provided by the WebRTC
framework. It makes some use-cases easier, and leads us closer to manual audio
unit management, which we will likely need in the near future.
When we are in the default state (i.e., not in a meeting) we shouldn't override
the AVAudioSession category and mode. It's a singleton and we might be
interfering with other components of the host app which use it.
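A minimal sketch of the intent, with a hypothetical AudioMode enum standing in
for the module's actual modes:

~~~swift
import AVFoundation

enum AudioMode { case `default`, audioCall, videoCall }

// Only touch the shared AVAudioSession when a meeting is active; in the
// default state the host app keeps whatever it configured.
func apply(mode: AudioMode) throws {
    let session = AVAudioSession.sharedInstance()
    switch mode {
    case .default:
        break  // leave category and mode alone
    case .audioCall:
        try session.setCategory(.playAndRecord, mode: .voiceChat, options: [])
    case .videoCall:
        try session.setCategory(.playAndRecord, mode: .videoChat, options: [])
    }
}
~~~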
This is mostly implemented in the app, with the needed support in the SDK. Since
the app needs to donate intents and deal with creating NSUserActivity objects it
doesn't feel right to do this in a library. Instead, we donate the intents from
the app, but the SDK is ready to extract conference URLs from any intent which
was registered as a conference activity.
This also opens the door for eventually adding Handoff support.
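A hypothetical helper for the extraction side (the userInfo key is an
assumption):

~~~swift
import Foundation

// Pull a conference URL out of an NSUserActivity that was registered as a
// conference activity, whether it came from Handoff, Siri, or our own donation.
func conferenceURL(from userActivity: NSUserActivity) -> URL? {
    if let url = userActivity.webpageURL {
        return url
    }
    // Fall back to userInfo, where the donating app may have stashed the URL.
    if let urlString = userActivity.userInfo?["url"] as? String {
        return URL(string: urlString)
    }
    return nil
}
~~~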
The upstream package has been unmaintained for 2 years now, and making the little
changes needed as React Native requires them is getting old. The actual
functionality is a couple of one-liners plus tons of boilerplate, which gets
reduced by quite a bit if we just embed it. So here it goes.
When a native iOS module implements `constantsToExport` it must define
`requiresMainQueueSetup`. In this case we don't do any UI work, so it doesn't
need to be initialized on the main thread.
There is no reason for them to run on the main thread: it's safe to call
AVFoundation functions from threads other than the main thread.
The previous code made an incorrect claim about the thread in which the audio
route change notification selector is called: it's called on a secondary thread:
https://developer.apple.com/documentation/avfoundation/avaudiosessionroutechangenotification
Fix the "mute ping pong" for once and for all. This patch takes a new approach
to the problem: it keeps track of the user generated CallKit transaction ations
and avoids calling the delegate method in those cases.
This results in a much cleaner and easier to understand handling of the flow: if
the delegate method is called it means the user tapped on the mute button. When
we sync the muted state in JS with CallKit the delegate method won't be called
at all, thus avoiding the ping-pong altogether.
In addition, make sure all CallKit methods run on the UI thread. CallKit will
call our delegate methods on the UI thread too, thus there is no need to
synchronize access to the listener / pending action sets.
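A sketch of the approach (type and method names are assumptions, not the actual
implementation):

~~~swift
import CallKit

// Remember which mute actions we requested ourselves, so the delegate callback
// is only forwarded to JS for user-initiated ones (taps in the CallKit UI).
// Per the commit, everything here runs on the UI thread, so no locking is needed.
final class CallKitMuteTracker {
    private var pendingMuteActions: Set<UUID> = []
    private let callController = CXCallController()

    // Called when JS syncs its muted state into CallKit.
    func setMuted(_ muted: Bool, callUUID: UUID) {
        let action = CXSetMutedCallAction(call: callUUID, muted: muted)
        pendingMuteActions.insert(action.uuid)
        callController.request(CXTransaction(action: action)) { _ in }
    }

    // Called from the CXProviderDelegate whenever a mute action is performed.
    func shouldNotifyJS(for action: CXSetMutedCallAction) -> Bool {
        // If the UUID was one of ours, swallow it; otherwise the user tapped mute.
        return pendingMuteActions.remove(action.uuid) == nil
    }
}
~~~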
Fixes the following warning:
~~~
Module XXX requires main queue setup since it overrides `constantsToExport` but doesn't implement `requiresMainQueueSetup`. In a future release React Native will default to initializing all native modules on a background thread unless explicitly opted-out of.
~~~
For AppInfo and AudioMode, there is no need to initialize anything on the UI
thread, so just return NO.
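A Swift rendering of the idea for a module like AudioMode (the actual modules
are Objective-C and the exported constants shown here are placeholders):

~~~swift
import Foundation
import React  // assumes the React pod is exposed as a Swift module

@objc(AudioMode)
final class AudioMode: NSObject, RCTBridgeModule {
    static func moduleName() -> String! { "AudioMode" }

    // No UIKit work happens during setup, so answer NO and silence the warning.
    static func requiresMainQueueSetup() -> Bool { false }

    func constantsToExport() -> [AnyHashable: Any]! {
        ["DEFAULT": "default", "AUDIO_CALL": "audioCall", "VIDEO_CALL": "videoCall"]
    }
}
~~~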
With this the RN component and the consumer app can share the same CallKit
provider and configuration, and multiple listeners can take part in the CallKit
flow events. The main driver of this is to enable the consumer app to report an
incoming call to the OS before loading the JitsiMeetView. Once the user answers
the call, the app can instantiate a JitsiMeetView, pass the CallKit call UUID,
and the Jitsi Meet components will handle the connection and report back to
CallKit that the call has been established.
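A hypothetical host-app flow built on top of this (names such as sharedProvider
and the provider configuration are assumptions):

~~~swift
import CallKit

let sharedProvider = CXProvider(configuration: CXProviderConfiguration(localizedName: "ExampleApp"))

// Report the incoming call first; once answered, hand the same UUID to the
// JitsiMeetView so the SDK takes over the existing CallKit call instead of
// starting a new one.
func reportIncomingCall(handle: String, completion: @escaping (UUID?) -> Void) {
    let callUUID = UUID()
    let update = CXCallUpdate()
    update.remoteHandle = CXHandle(type: .generic, value: handle)
    update.hasVideo = true

    sharedProvider.reportNewIncomingCall(with: callUUID, update: update) { error in
        // Pass `callUUID` to the JitsiMeetView on answer; nil means the OS rejected the call.
        completion(error == nil ? callUUID : nil)
    }
}
~~~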
* Button conditionally shown based on whether the feature is enabled and available
* Hooks for launching the invite UI (delegates to the native layer)
* Hooks for using the search and dial out checks from the native layer (calls back into JS)
* Hooks for handling sending invites and passing any failures back to the native layer
* Android and iOS handling for those hooks
Author: Ryan Peck <rpeck@atlassian.com>
Author: Eric Brynsvold <ebrynsvold@atlassian.com>
On Android the files will be copied to the assets/sounds directory of
the SDK bundle at build time. To play them, the "asset:/" prefix has to be
used to locate the files correctly.
On iOS each sound file must be added to the SDK's Xcode project in order
to be bundled correctly. For playback we need to know the path of the SDK
bundle, which is now exposed by the AppInfo iOS module.
This only works automatically on Android >= 8. On other platforms / versions, it
relies on the SDK user implementing a "reduced UI" mode and reacting to the
"request PIP" delegate method.