The time has come: we need to enable bitcode. It's optional for iOS targets, but
mandatory for the entire project if there is a watchOS target. Since we have a
watchOS target, it's time to enable it.
Replace the Swift array with an Objective-C one, since it's going to store
Objective-C objects and not Swift objects (or Swift objects which inherit from
NSObject, which is equivalent).
This avoids the need for JMCallKitEventListenerWrapper entirely, since an
NSArray can store NSObjectProtocol objects, unlike a Swift array, which prompted
the creation of the wrapper in the first place.
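A minimal sketch of the idea (class and method names are illustrative, not the SDK's exact API):
~~~swift
import Foundation

// Illustrative listener registry: NSMutableArray holds any
// NSObjectProtocol-conforming object directly, so no wrapper type
// (like JMCallKitEventListenerWrapper) is needed.
class ListenerRegistry {
    private let listeners = NSMutableArray()

    func addListener(_ listener: NSObjectProtocol) {
        listeners.add(listener)
    }

    func removeListener(_ listener: NSObjectProtocol) {
        listeners.remove(listener)
    }
}
~~~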
The SDK will now search for an asset called "CallKitIcon" in the main bundle,
and fall back to a built-in asset if it's not there, allowing SDK users to
customize it by just adding an asset with that name.
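The lookup order can be pictured like this (a sketch; `sdkBundle` stands in for however the SDK locates its own resources):
~~~swift
import UIKit

// Prefer a "CallKitIcon" asset from the host app's main bundle and
// fall back to the SDK's bundled asset. `sdkBundle` is an assumption.
func callKitIcon(sdkBundle: Bundle) -> UIImage? {
    if let custom = UIImage(named: "CallKitIcon",
                            in: .main,
                            compatibleWith: nil) {
        return custom
    }
    return UIImage(named: "CallKitIcon",
                   in: sdkBundle,
                   compatibleWith: nil)
}
~~~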
Ever since we switched to handling track events instead of mute actions this has
been dead code. It was also added in the wrong place, since it's the
responsibility of the JS code to solve the ping-pong problem.
NSURLConnection sendSynchronousRequest is deprecated since iOS 9. Replace the
method with what's currently on RN master, which implements a modern alternative.
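The replacement pattern looks roughly like this (a generic URLSession sketch, not the exact code on RN master):
~~~swift
import Foundation

// Emulate the deprecated NSURLConnection.sendSynchronousRequest by
// blocking on a URLSession data task with a semaphore.
func sendSynchronousRequest(_ request: URLRequest)
        -> (Data?, URLResponse?, Error?) {
    var result: (Data?, URLResponse?, Error?) = (nil, nil, nil)
    let semaphore = DispatchSemaphore(value: 0)

    URLSession.shared.dataTask(with: request) { data, response, error in
        result = (data, response, error)
        semaphore.signal()
    }.resume()

    semaphore.wait()
    return result
}
~~~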
Consolidate all failure cases into a single one: CONFERENCE_TERMINATED. If the
conference ended gracefully, no error indicator will be present; otherwise there
will be one.
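On the iOS side, an embedding app might consume the consolidated event roughly like this (a sketch; the delegate method name and payload key are assumptions):
~~~swift
import Foundation

// Hypothetical handler for the single CONFERENCE_TERMINATED event: the
// presence of an error distinguishes failure from a graceful end.
class ConferenceObserver: NSObject {
    func conferenceTerminated(_ data: [AnyHashable: Any]) {
        if let error = data["error"] {
            print("Conference failed: \(error)")
        } else {
            print("Conference ended gracefully")
        }
    }
}
~~~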
Since the SDK may be embedded in other apps, we need to recognize our custom
URL scheme and universal links in order to tell the user if we will process the
request or not.
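For illustration, a recognition check of this shape (the scheme and host are made up):
~~~swift
import Foundation

// Decide whether a URL is ours to process: either our custom scheme or
// a universal link pointing at our domain. Both values are examples.
func canHandle(_ url: URL) -> Bool {
    if url.scheme == "myapp" {
        return true
    }
    return url.scheme == "https" && url.host == "meet.example.com"
}
~~~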
Make them configurable with sane defaults.
RTCAudioSession is a thin wrapper around AVAudioSession provided by the WebRTC
framework. It makes some use-cases easier, and leads us closer to manual audio
unit management, which we will likely need in the near future.
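Roughly, the kind of configuration it enables (exact signatures vary across WebRTC revisions, so treat this as illustrative):
~~~swift
import AVFoundation
// import WebRTC  // provides RTCAudioSession

// Configuration changes must happen between lockForConfiguration() and
// unlockForConfiguration(); method signatures here are approximate.
func configureForCall(_ session: RTCAudioSession) {
    session.lockForConfiguration()
    defer { session.unlockForConfiguration() }
    do {
        try session.setCategory(AVAudioSessionCategoryPlayAndRecord)
        try session.setMode(AVAudioSessionModeVoiceChat)
    } catch {
        print("Failed to configure audio session: \(error)")
    }
}
~~~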
When we are in the default state (i.e., not in a meeting) we shouldn't override
the AVAudioSession category and mode. It's a singleton and we might be bothering
other components of the host app which use it.
* feat(Android): implement ConnectionService
Adds basic integration with Android's ConnectionService by implementing
the outgoing call scenario.
* ref(callkit): rename _SET_CALLKIT_SUBSCRIPTIONS
* ref(callkit): move feature to call-integration directory
* feat(ConnectionService): synchronize video state
* ref(AudioMode): use ConnectionService on API >= 26
Not ready yet; a few details are left, as mentioned in the FIXMEs.
* feat(ConnectionService): add debug logs
Adds logs to trace the calls.
* fix(ConnectionService): leaking ConnectionImpl instances
Turns out there is no callback fired back from the JavaScript side after
the disconnect or abort event is sent from the native side. The connection
must be marked as disconnected and removed immediately.
* feat(ConnectionService): handle onCreateOutgoingConnectionFailed
* ref(ConnectionService): merge classes and move to the sdk package
* feat(CallIntegration): show Alert if outgoing call fails
* fix(ConnectionService): alternatively get call UUID from the account
Some Android flavours (or versions?) do not copy over extras to
the onCreateOutgoingConnectionFailed callback. But the call UUID is also
set as the PhoneAccount's label, so eventually it should be available
there.
* ref(ConnectionService): use call UUID as PhoneAccount ID.
The extra is not reliable on some custom Android flavours. It also makes
sense to use a unique ID for the account instead of the URL, given that
it's created on a per-call basis.
* fix(ConnectionService): abort the call when hold is requested
Turns out Android P can sometimes request HOLD even though there's no
HOLD capability added to the connection (what!?), so just abort the call
in that case.
* fix(ConnectionService): unregister account on call failure
Unregister the PhoneAccount in onCreateOutgoingConnectionFailed. That
happens before the ConnectionImpl instance, which is normally responsible
for doing that, is created.
* fix(AudioModeModule): make package private and run on the audio thread
* address other review comments
This is mostly implemented in the app, with the needed support in the SDK. Since
the app needs to donate intents and deal with creating NSUserActivity objects,
it doesn't feel right to do this in a library. Instead, we donate the intents from
the app, but the SDK is ready to extract conference URLs from any intent which
was registered as a conference activity.
This also opens the door for eventually adding Handoff support.
The upstream package has been unmaintained for 2 years now, and making the little
changes needed as React Native requires them is getting old. The actual
functionality is a couple of one-liners plus tons of boilerplate, which gets
reduced by quite a bit if we just embed it. So here it goes.
When a native iOS module implements `constantsToExport` it must define
`requiresMainQueueSetup`. In this case we don't do any UI stuff so it doesn't
need to be initialized in the main thread.
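A sketch of such a module (RCTBridgeModule conformance and macro boilerplate omitted; the module name is illustrative):
~~~swift
import Foundation

// Exports constants only, touches no UI, so main-queue setup is not
// required and the module can opt out of it.
@objc(AppInfo)
class AppInfo: NSObject {
    @objc static func requiresMainQueueSetup() -> Bool {
        return false
    }

    @objc func constantsToExport() -> [AnyHashable: Any] {
        return ["sdkBundlePath": Bundle.main.bundlePath]
    }
}
~~~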
There is no reason for them to run on the main thread; it's safe to call
AVFoundation functions on threads other than the main thread.
The previous code made an incorrect claim about the thread in which the audio
route change notification selector is called: it's called on a secondary thread:
https://developer.apple.com/documentation/avfoundation/avaudiosessionroutechangenotification
Fix the "mute ping pong" once and for all. This patch takes a new approach
to the problem: it keeps track of the user-generated CallKit transaction actions
and avoids calling the delegate method in those cases.
This results in a much cleaner and easier to understand handling of the flow: if
the delegate method is called it means the user tapped on the mute button. When
we sync the muted state in JS with CallKit the delegate method won't be called
at all, thus avoiding the ping-pong altogether.
In addition, make sure all CallKit methods run in the UI thread. CallKit will
call our delegate methods in the UI thread too, thus there is no need to
synchronize access to the listener / pending action sets.
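The mechanism can be sketched like this (names are illustrative; the key is matching the action UUIDs we created ourselves):
~~~swift
import CallKit

// Track the UUIDs of mute actions that JS requested; when CallKit calls
// the provider delegate back for one of them, skip notifying listeners.
class MuteActionTracker {
    private let callController = CXCallController()
    private var pendingMuteActions: Set<UUID> = []

    // Called when JS syncs the muted state to CallKit.
    func setMuted(_ muted: Bool, call: UUID) {
        let action = CXSetMutedCallAction(call: call, muted: muted)
        pendingMuteActions.insert(action.uuid)
        callController.requestTransaction(with: [action]) { _ in }
    }

    // Called from provider(_:perform:) for CXSetMutedCallAction. Returns
    // true only for user-generated actions (mute button taps).
    func isUserGenerated(_ action: CXSetMutedCallAction) -> Bool {
        return pendingMuteActions.remove(action.uuid) == nil
    }
}
~~~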
Fixes the following warning:
~~~
Module XXX requires main queue setup since it overrides `constantsToExport` but doesn't implement `requiresMainQueueSetup`. In a future release React Native will default to initializing all native modules on a background thread unless explicitly opted-out of.
~~~
For AppInfo and AudioMode, there is no need to initialize anything in the UI
thread, so just return NO.
* feat(recording): add sounds for when recording starts and stops
* squash: use constants, play sounds for file only
* squash: rename recordingStopped.mp3 -> recordingOff.mp3
* squash: flip var declaration for alpha order
This fixes a regression introduced in 8f75c2e279
The bundler script doesn't do anything (it literally exits right at the top)
when skipping the bundle. This is arguably wrong, because it doesn't generate
"ip.txt", the file with the bundler IP address, either!
So, generate that ourselves. While doing this, also drop the need for xip.io,
which has also been removed from RN, since it gives more trouble than it solves.
With this, the RN components and the consumer app can share the same
CallKit provider and configuration, and multiple parties can subscribe as
listeners of the CallKit flow events. The main driver of this is to enable
the consumer app to report an incoming call to the OS before loading the
JitsiMeetView. Once the user answers the call, the app can instantiate a
JitsiMeetView, pass the CallKit call UUID, and the Jitsi Meet components
will handle the connection and report back to CallKit that the call has
been established.
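In rough strokes, the consumer app's side of that flow (a sketch using standard CallKit calls):
~~~swift
import CallKit

// Report the incoming call before the JitsiMeetView exists; the same
// UUID is later handed to the SDK so it joins the existing call.
func reportIncomingCall(provider: CXProvider,
                        callUUID: UUID,
                        roomName: String) {
    let update = CXCallUpdate()
    update.remoteHandle = CXHandle(type: .generic, value: roomName)
    update.hasVideo = true

    provider.reportNewIncomingCall(with: callUUID, update: update) { error in
        // On answer: instantiate a JitsiMeetView, pass `callUUID` along,
        // and let the Jitsi Meet components report the call as connected.
        if let error = error {
            print("Failed to report call: \(error)")
        }
    }
}
~~~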
* Button conditionally shown based on whether the feature is enabled and available
* Hooks for launching the invite UI (delegates to the native layer)
* Hooks for using the search and dial out checks from the native layer (calls back into JS)
* Hooks for handling sending invites and passing any failures back to the native layer
* Android and iOS handling for those hooks
Author: Ryan Peck <rpeck@atlassian.com>
Author: Eric Brynsvold <ebrynsvold@atlassian.com>
On Android the files will be copied to the assets/sounds directory of
the SDK bundle at build time. To play them, the "asset:/" prefix has to
be used to locate the files correctly.
On iOS each sound file must be added to the SDK's Xcode project in order
to be bundled correctly. For playback we need to know the path of the SDK
bundle, which is now exposed by the AppInfo iOS module.
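For instance, on iOS the path resolution might look like this (a sketch; the bundle-path plumbing via AppInfo is the part this commit adds):
~~~swift
import Foundation

// Resolve a sound inside the SDK bundle, whose path is exposed by the
// AppInfo module. On Android the equivalent identifier would be
// "asset:/sounds/<name>.mp3".
func soundURL(named name: String, sdkBundlePath: String) -> URL? {
    guard let bundle = Bundle(path: sdkBundlePath) else { return nil }
    return bundle.url(forResource: name, withExtension: "mp3")
}
~~~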
This only works automatically on Android >= 8. On other platforms / versions, it
relies on the SDK user implementing a "reduced UI" mode and reacting to the
"request PIP" delegate method.
Story time. Currently the app can be started in 4 ways:
- just tapping on the icon
- via a deep link
- via a universal link
- via the phone's recent calls list
The last 3 options will make the app join the specified room upon launch. React
Native's Linking module implements the necessary bits to handle deep or
universal linking, but CallKit is out of its scope.
In order to unify all app startup modes, a new LaunchOptions module (iOS
only) exports a getInitialURL function, akin to the one in the Linking module,
but taking CallKit intents into consideration. This function is then used to
make app startup with a URL consistent across all the different modes.
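The underlying idea can be sketched as a small cache the native side fills and the JS side drains (names are illustrative, not the module's actual shape):
~~~swift
import Foundation

// Cache the URL the app was launched with, whether it came from a deep
// link, a universal link, or a CallKit intent, and hand it out once.
class LaunchURLStore {
    static let shared = LaunchURLStore()
    private var initialURL: URL?

    // Fed from any of the launch paths.
    func record(_ url: URL) {
        if initialURL == nil { initialURL = url }
    }

    // Backs the getInitialURL export consumed from JS.
    func consumeInitialURL() -> URL? {
        defer { initialURL = nil }
        return initialURL
    }
}
~~~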
Due to the difference in nature, the iOS and Android implementations are
completely different:
iOS: MPVolumeView is used, which allows us to place a button which will launch a
native route picker provided by iOS itself. This view is different depending on
the iOS version, with the iOS 11 version being more complete.
Android: A completely custom component is used, which displays a bottom sheet
with the device categories, not devices individually. This is akin to the sheet
in the built-in dialer.
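For the iOS side described above, the configuration is small (showsRouteButton was later deprecated in iOS 13, but it was the mechanism of this era):
~~~swift
import MediaPlayer

// An MPVolumeView reduced to just the route button, which launches the
// system-provided route picker.
func makeRouteButton(frame: CGRect) -> MPVolumeView {
    let view = MPVolumeView(frame: frame)
    view.showsVolumeSlider = false
    view.showsRouteButton = true
    return view
}
~~~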
Revert "[RN] Remove unnecessary source code" (commit
a3441030a3). But since the project file
needs to explicitly mention the CallKit and Intents frameworks, do not
use the semantic @import as that's confusing in this case.
* Javadoc introduced @code as a replacement of <code> and <tt> which is
better aligned with other javadoc tags such as @link. Use it in the
Java source code. If we switch to Kotlin, then we'll definitely use
Markdown.
* There are more uses of @code in the JavaScript source code than <tt>
so use @code for the sake of consistency. Eventually, I'd rather we
switch to Markdown because it's easier on my eyes.
* Xcode is plain confused by @code and @link. The Internet says that
Xcode supports the backquote character to denote the beginning and end
of a string of characters which should be formatted for display as
code but it doesn't work for me. <tt> is not rendered at all. So use
the backquote which is rendered itself. Hopefully, if we switch to
Markdown, then it'll be common between JavaScript and Objective-C
source code.
This commit adds initial support for CallKit on supported platforms: iOS >= 10.
Since the call flow in Jitsi Meet is basically making outgoing calls, only
outgoing call support is currently handled via CallKit.
Features:
- "Green bar" when in a call.
- Native CallKit view when tapping on the call label on the lock screen.
- Support for audio muting from the native CallKit view.
- Support for recent calls (audio-only calls logged as Audio calls, others show
as Video calls).
- Call display name is room name.
- Graceful downgrade on systems without CallKit support.
Limitations:
- Native CallKit view cannot be shown for audio-only calls (this is a CallKit
limitation).
- The video button in the CallKit view will start a new video call to the same
room, and terminate the previous one.
- No support for call hold.
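The outgoing-call entry point boils down to a CXStartCallAction transaction, roughly (a sketch with illustrative names):
~~~swift
import CallKit

// Ask CallKit to start an outgoing call for a room; the isVideo flag is
// what makes Recents log it as a Video call rather than an Audio call.
func startOutgoingCall(controller: CXCallController,
                       roomName: String,
                       hasVideo: Bool) -> UUID {
    let callUUID = UUID()
    let handle = CXHandle(type: .generic, value: roomName)
    let action = CXStartCallAction(call: callUUID, handle: handle)
    action.isVideo = hasVideo

    controller.request(CXTransaction(action: action)) { error in
        if let error = error {
            print("CallKit start call failed: \(error)")
        }
    }
    return callUUID
}
~~~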
The error is stored in the redux store in base/config so other components can
consult it. It is also broadcast as a new event in the external API for the
SDK.
JitsiMeetViewListener currently has methods of one and the same pattern,
so adding new methods (i.e. events, i.e. redux action types) is a question
of repetition in the Java source code. Speed up the support of new
events by trying to deal with them in a generic way.
The same goes for JitsiMeetViewDelegate.
Deep/universal linking now utilizes loadURL (when possible). But loadURL
is imperative in the native source code while its JavaScript counterpart
i.e. React App Component prop url is declarative. So there's the
following bug: open a URL, leave the conference (by tapping the hangup
button, for example), and then opening the same URL actually leaves you
on the Welcome page (if enabled; otherwise, a black screen).
The implementation has a flaw though: opening the same URL twice in a
row without an intervening leave will leave the first opening and join
the new opening. This can be improved by not leaving and joining if the
conference is joined, joining, and not leaving. But that can be done
separately as an improvement independent of the current implementation
details.
Introduces loadURLObject in JitsiMeetView on Android and iOS which
accepts a Bundle and NSDictionary, respectively, similar in structure to
the JS object accepted by the constructor of Web's ExternalAPI. At this
time, only the property url of the bundle/dictionary is supported.
However, it allows the public API of loadURLObject to be consumed. The
property url will be made optional in the future and other properties
will be supported from which a URL will be constructed.
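A usage sketch on iOS (the room URL is illustrative):
~~~swift
import UIKit

// Only the "url" key is supported at this time, mirroring the JS object
// accepted by the Web ExternalAPI constructor.
func joinRoom(with view: JitsiMeetView) {
    view.loadURLObject(["url": "https://meet.example.com/SomeRoom"])
}
~~~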
Initializing a new URL/NSURL instance is a chore especially when one
takes into account that the JavaScript side (1) is loading the URL
asynchronously and (2) is capable of parsing strings that may or may not
be represented as URL/NSURL.
The Android method loadURLString(String) may have been called
loadURL(String) to overload loadURL(URL) but I didn't want to do that
because:
1. It would not be compatible with existing source code such as
loadURL(null) which would have become ambiguous.
2. I wanted to achieve better convergence with the iOS API.
On iOS, if the app is closed the startup options are only passed as the
`launchOptions` dictionary of `applicationDidFinishLaunching`. Thus add a helper
method to be called from there by embedding applications so we can copy that
dictionary.
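An embedding app's AppDelegate would then look roughly like this (the exact helper name on the SDK side is assumed):
~~~swift
import UIKit

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions
                         launchOptions: [UIApplicationLaunchOptionsKey: Any]?)
            -> Bool {
        // Hand the dictionary to the SDK so startup options that arrived
        // while the app was closed are not lost.
        JitsiMeetView.application(application,
                                  didFinishLaunchingWithOptions: launchOptions)
        return true
    }
}
~~~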
Before, Jitsi Meet (the app) would only link with JitsiMeet.framework, which in
turn embedded WebRTC.framework. While possible, Apple doesn't allow apps with
nested frameworks to be submitted to the store. Now the app will link with
WebRTC.framework directly so there is no framework nesting.
A potential improvement here is to build WebRTC as a static library so it can
then be embedded in JitsiMeet.framework and completely hidden from the app.
The current implementation doesn't use the API and Transport modules. This is
due to the fact that they are too tied to APP at the moment, which is web only.
Once API is refactored and moved into the Redux store this will be adjusted,
though it's unlikely that the lowest level React Native module (ExternalAPI)
changes drastically.
This commit also introduces a stopgap limitation of only allowing a single
instance for JitsiMeetView objects on both Android and iOS. React Native doesn't
really play well with having multiple instances of the same modules on the same
bridge, since they behave a bit like singletons. Even if we were to use multiple
bridges, some features depend on system-level global state, such as the
AVAudioSession mode or Android's immersive mode. Further attempts will be made
at lifting this limitation in the future, though.
1. Aligns the project structure of Jitsi Meet SDK for iOS with that for
Android for better comprehension.
2. The command `react-native run-ios` uses the last Xcode project or
workspace in the list of these sorted in alphabetical order. Which
limits our freedom in naming. Thus having only an Xcode project in
the root directory of the iOS project structure gives us back the
freedom in naming.
3. Allows the Podspec to work for the app project in addition to the sdk
project because we need Crashlytics in the app which is integrated
via Cocoapods as well.
4. Further removes references to JitsiKit in the source code for the
sake of consistent naming.
Ladies and gentlemen, allow me to introduce you to Jitsi Meet SDK for iOS, the
mobile SDK which powers Jitsi Meet.
The goal is to encapsulate the entire React Native app into a framework / SDK
and offer an API for native (ObjC or Swift) applications to embed the Jitsi
conferencing experience.
While React Native can be embedded in native applications, I don't think it was
designed to be embedded as part of a framework, hidden away from the application
using it. This surfaced as a number of issues which had to be addressed
specifically due to our use-case:
- Universal / deep linking needed to be wrapped to avoid forcing the embedding
app to link with RN.
- The bundle URL had to be manually constructed, since RN considers that all
resources are in the main bundle, but in case of a framework that is not the
case.
- Custom fonts had to be manually loaded, since UIAppFonts doesn't work on the
framework's Info.plist file.
- The RN packager has to be manually triggered since the React project will no
longer do it for us.
- Custom App Transport Security rules were added since the built-in way to do it
modifies the framework's Info.plist, which is useless in this case.
At this stage, the Jitsi Meet application is just a small single view
application which uses the Jitsi Meet SDK to create a single view which
represents the entire application. Events and external conference handling are
forthcoming.