NSURLConnection sendSynchronousRequest is deprecated since iOS 9. Replace the
method with what's currently on RN master, which implements a modern alternative.
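For reference, the pattern boils down to an NSURLSession data task made
synchronous with a semaphore; a minimal sketch (not the exact RN code):
~~~
// Minimal sketch of the NSURLSession-based replacement for the deprecated
// +[NSURLConnection sendSynchronousRequest:...]: a data task whose
// completion is awaited with a semaphore to keep synchronous semantics.
#import <Foundation/Foundation.h>

static NSData *fetchSynchronously(NSURLRequest *request,
                                  NSURLResponse **response,
                                  NSError **error) {
    __block NSData *result = nil;
    __block NSURLResponse *taskResponse = nil;
    __block NSError *taskError = nil;
    dispatch_semaphore_t sem = dispatch_semaphore_create(0);

    [[[NSURLSession sharedSession]
        dataTaskWithRequest:request
          completionHandler:^(NSData *data, NSURLResponse *resp, NSError *err) {
              result = data;
              taskResponse = resp;
              taskError = err;
              dispatch_semaphore_signal(sem);
          }] resume];

    dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
    if (response) { *response = taskResponse; }
    if (error) { *error = taskError; }
    return result;
}
~~~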
Consolidate all failure cases into a single one: CONFERENCE_TERMINATED. If the
conference ended gracefully no error indicator will be present; otherwise there
will be one.
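An embedding app then only has one case to handle; a sketch, assuming the event
data is delivered to a conferenceTerminated:-style delegate method:
~~~
// Sketch of handling the consolidated event in an embedding app; the
// delegate method name follows the SDK's JitsiMeetViewDelegate convention.
- (void)conferenceTerminated:(NSDictionary *)data {
    if (data[@"error"] != nil) {
        NSLog(@"Conference terminated with error: %@", data[@"error"]);
    } else {
        NSLog(@"Conference ended gracefully.");
    }
}
~~~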
Since the SDK may be embedded in other apps, we need to recognize our custom
URL scheme and universal links in order to tell the user whether we will process
the request or not.
Make them configurable with sane defaults.
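A sketch of the resulting check (customUrlScheme and universalLinkDomains are
hypothetical names for those configurable values):
~~~
// Hypothetical sketch: decide whether an incoming URL is one we handle.
// customUrlScheme and universalLinkDomains stand for the configurable
// values with sane defaults mentioned above.
- (BOOL)canHandleURL:(NSURL *)url {
    if ([url.scheme isEqualToString:self.customUrlScheme]) {
        return YES;
    }
    return [self.universalLinkDomains containsObject:url.host];
}
~~~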
RTCAudioSession is a thin wrapper around AVAudioSession provided by the WebRTC
framework. It makes some use-cases easier, and leads us closer to manual audio
unit management, which we will likely need in the near future.
When we are in the default state (ie, not in a meeting) we shouldn't override
the AVAudioSession category and mode. It's a singleton and we might be bothering
other components of the host app which use it.
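A sketch of what that looks like with RTCAudioSession (error handling elided):
~~~
#import <AVFoundation/AVFoundation.h>
#import <WebRTC/RTCAudioSession.h>

// Sketch: only override the shared session's category / mode when we are
// actually in a meeting; in the default state, leave it untouched.
- (void)updateAudioSessionForCall:(BOOL)inCall {
    RTCAudioSession *session = [RTCAudioSession sharedInstance];
    [session lockForConfiguration];
    if (inCall) {
        [session setCategory:AVAudioSessionCategoryPlayAndRecord
                 withOptions:AVAudioSessionCategoryOptionAllowBluetooth
                       error:nil];
        [session setMode:AVAudioSessionModeVoiceChat error:nil];
    }
    [session unlockForConfiguration];
}
~~~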
* feat(Android): implement ConnectionService
Adds basic integration with Android's ConnectionService by implementing
the outgoing call scenario.
* ref(callkit): rename _SET_CALLKIT_SUBSCRIPTIONS
* ref(callkit): move feature to call-integration directory
* feat(ConnectionService): synchronize video state
* ref(AudioMode): use ConnectionService on API >= 26
Not ready yet - a few details are left, as mentioned in the FIXMEs
* feat(ConnectionService): add debug logs
Adds logs to trace the calls.
* fix(ConnectionService): leaking ConnectionImpl instances
Turns out there is no callback fired back from the JavaScript side after
the disconnect or abort event is sent from the native side. The connection
must be marked as disconnected and removed immediately.
* feat(ConnectionService): handle onCreateOutgoingConnectionFailed
* ref(ConnectionService): merge classes and move to the sdk package
* feat(CallIntegration): show Alert if outgoing call fails
* fix(ConnectionService): alternatively get call UUID from the account
Some Android flavours (or versions?) don't copy extras over to
the onCreateOutgoingConnectionFailed callback. But the call UUID is also
set as the PhoneAccount's label, so eventually it should be available
there.
* ref(ConnectionService): use call UUID as PhoneAccount ID
The extra is not reliable on some custom Android flavours. It also makes
sense to use a unique id for the account instead of the URL, given that
it's created on a per-call basis.
* fix(ConnectionService): abort the call when hold is requested
Turns out Android P can sometimes request HOLD even though there's no
HOLD capability added to the connection (what!?), so just abort the call
in that case.
* fix(ConnectionService): unregister account on call failure
Unregister the PhoneAccount in onCreateOutgoingConnectionFailed. That
happens before the ConnectionImpl instance, which is normally responsible
for doing that, is created.
* fix(AudioModeModule): make package private and run on the audio thread
* address other review comments
This is mostly implemented in the app, with the needed support in the SDK. Since
the app needs to donate intents and deal with creating NSUserActivity objects it
doesn't feel right to do this in a library. Instead, we donate the intents from
the app, but the SDK is ready to extract conference URLs from any intent which
was registered as a conference activity.
This also opens the door for eventually adding Handoff support.
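A sketch of the SDK-side extraction (the userInfo key is illustrative):
~~~
// Sketch: pull a conference URL out of an NSUserActivity, whether it comes
// from Handoff / universal links or from an intent the app donated.
- (NSURL *)conferenceURLFromUserActivity:(NSUserActivity *)activity {
    if ([activity.activityType isEqualToString:NSUserActivityTypeBrowsingWeb]) {
        return activity.webpageURL;
    }
    NSString *url = activity.userInfo[@"url"]; // illustrative key
    return url != nil ? [NSURL URLWithString:url] : nil;
}
~~~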
This is done at the app level, not the SDK.
Currently 2 Firebase services are used:
- Crashlytics
- Dynamic Links
They are enabled in tandem, if the appropriate Google services file
(GoogleService-Info.plist on iOS or google-services.json on Android) is found.
Each service needs to be individually enabled in the Firebase console.
Xcode 10 introduced a new build system. Alas, it breaks a number of important
flows, such as creating an archive for the framework (ie SDK) target.
In order to "fix" this, switch back to the former (Xcode 9) build system for the
time being.
The upstream package has been unmaintained for 2 years now, and making the
little changes React Native needs is getting old. The actual functionality is
a couple of one-liners plus tons of boilerplate, which gets reduced by quite a
bit if we just embed it. So here it goes.
When a native iOS module implements `constantsToExport` it must define
`requiresMainQueueSetup`. In this case we don't do any UI stuff so it doesn't
need to be initialized in the main thread.
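So the fix amounts to the following boilerplate in those modules:
~~~
// No UI work happens during initialization, so main-queue setup is not
// required.
+ (BOOL)requiresMainQueueSetup {
    return NO;
}
~~~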
App Store Connect reported the following issues in (and rejected the binary
of) Jitsi Meet 1.18.x:
NSBluetoothPeripheralUsageDescription
NSAppleMusicUsageDescription
NSMotionUsageDescription
NSSpeechRecognitionUsageDescription
Starting spring 2019, all apps submitted to the App Store that access user
data will be required to include a purpose string for the following:
NSLocationAlwaysUsageDescription
NSLocationWhenInUseUsageDescription
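These purpose strings go in the app's Info.plist; for example (wording
illustrative):
~~~
<key>NSBluetoothPeripheralUsageDescription</key>
<string>Bluetooth may be used to route call audio to Bluetooth devices.</string>
<key>NSLocationWhenInUseUsageDescription</key>
<string>Location may be attached to feedback reports.</string>
~~~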
Use react-native-fast-image, which provides 2 fully native image
implementations using well known and mature (native) libraries.
This gets rid of 2 libraries which were observed to be a source of bugs and
created trouble with dependencies: react-native-fetch-blob and
react-native-img-cache. They are also no longer well maintained.
There is no reason for them to run on the main thread, it's safe to call
AVFoundation functions on threads other than the main thread.
The previous code made an incorrect claim about the thread in which the audio
route change notification selector is called: it's called on a secondary thread:
https://developer.apple.com/documentation/avfoundation/avaudiosessionroutechangenotification
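A sketch of the resulting setup, with the handler doing its work directly on
whichever thread posts the notification:
~~~
#import <AVFoundation/AVFoundation.h>

// Sketch: handle route changes on the (secondary) thread that posts the
// notification; AVFoundation is safe to call off the main thread.
- (void)startObservingRouteChanges {
    [[NSNotificationCenter defaultCenter]
        addObserver:self
           selector:@selector(routeChanged:)
               name:AVAudioSessionRouteChangeNotification
             object:nil];
}

- (void)routeChanged:(NSNotification *)notification {
    // No dispatch to the main queue needed here.
    NSLog(@"New route: %@", [AVAudioSession sharedInstance].currentRoute);
}
~~~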
It will only be requested if a user joins a meeting or flips the switch from
video to audio and back, but never as the first thing when the welcome page is
mounted.
Fix the "mute ping pong" for once and for all. This patch takes a new approach
to the problem: it keeps track of the user generated CallKit transaction ations
and avoids calling the delegate method in those cases.
This results in a much cleaner and easier to understand handling of the flow: if
the delegate method is called it means the user tapped on the mute button. When
we sync the muted state in JS with CallKit the delegate method won't be called
at all, thus avoiding the ping-pong altogether.
In addition, make sure all CallKit methods run in the UI thread. CallKit will
call our delegate methods in the UI thread too, thus there is no need to
synchronize access to the listener / pending action sets.
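A sketch of the approach (pendingActions, callController and
notifyMuteChangedTo: are illustrative names):
~~~
#import <CallKit/CallKit.h>

// Sketch: remember the UUIDs of actions we request ourselves, so the
// provider delegate can tell them apart from actions generated by the user
// tapping the CallKit UI.
- (void)setMuted:(BOOL)muted forCall:(NSUUID *)callUUID {
    CXSetMutedCallAction *action =
        [[CXSetMutedCallAction alloc] initWithCallUUID:callUUID muted:muted];
    [self.pendingActions addObject:action.UUID];
    CXTransaction *transaction = [[CXTransaction alloc] initWithAction:action];
    [self.callController requestTransaction:transaction
                                 completion:^(NSError *error) {}];
}

// CXProviderDelegate
- (void)provider:(CXProvider *)provider
        performSetMutedCallAction:(CXSetMutedCallAction *)action {
    if (![self.pendingActions containsObject:action.UUID]) {
        // Not one of ours: the user tapped mute, so notify the JS side.
        [self notifyMuteChangedTo:action.muted];
    }
    [self.pendingActions removeObject:action.UUID];
    [action fulfill];
}
~~~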
Fixes the following warning:
~~~
Module XXX requires main queue setup since it overrides `constantsToExport` but doesn't implement `requiresMainQueueSetup`. In a future release React Native will default to initializing all native modules on a background thread unless explicitly opted-out of.
~~~
For AppInfo and AudioMode, there is no need to initialize anything in the UI
thread, so just return NO.
* feat(recording): add sounds for when recording starts and stops
* squash: use constants, play sounds for file only
* squash: rename recordingStopped.mp3 -> recordingOff.mp3
* squash: flip var declaration for alpha order
This fixes a regression introduced in 8f75c2e279
The bundler script doesn't do anything (it literally exits right at the top)
when skipping the bundle. This is arguably wrong, because it doesn't generate
"ip.txt" (the file with the bundler IP address) either!
So, generate that ourselves. While doing this, also drop the need for xip.io,
which has also been removed from RN, since it causes more trouble than it
solves.
With this, the RN component and the consumer app can share the same CallKit
provider and configuration, and both can be listeners of the CallKit flow
events. The main driver of this is to enable the consumer app to report an
incoming call to the OS before loading the JitsiMeetView. Once the user
answers the call, the app can instantiate a JitsiMeetView, pass in the
CallKit call UUID, and the Jitsi Meet components will handle the connection
and report back to CallKit that the call has been established.
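A sketch of the consumer-app side of that flow:
~~~
#import <CallKit/CallKit.h>

// Sketch: the consumer app reports the incoming call before any Jitsi Meet
// UI exists. `provider` is the app's shared CXProvider instance.
NSUUID *callUUID = [NSUUID UUID];
CXCallUpdate *update = [[CXCallUpdate alloc] init];
update.remoteHandle = [[CXHandle alloc] initWithType:CXHandleTypeGeneric
                                               value:@"daily standup"];
[provider reportNewIncomingCallWithUUID:callUUID
                                 update:update
                             completion:^(NSError *error) {
    // The system call UI is now showing (or reporting failed).
}];

// Later, when the user answers (CXAnswerCallAction), the app instantiates
// a JitsiMeetView and passes it this same callUUID, so the Jitsi Meet
// components report to CallKit instead of starting a fresh call.
~~~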
* Button conditionally shown based on if the feature is enabled and available
* Hooks for launching the invite UI (delegates to the native layer)
* Hooks for using the search and dial out checks from the native layer (calls back into JS)
* Hooks for handling sending invites and passing any failures back to the native layer
* Android and iOS handling for those hooks
Author: Ryan Peck <rpeck@atlassian.com>
Author: Eric Brynsvold <ebrynsvold@atlassian.com>
On Android the files will be copied to the assets/sounds directory of
the SDK bundle at build time. To play them, the "asset:/" prefix has to be
used to locate the files correctly.
On iOS each sound file must be added to the SDK's Xcode project in order
to be bundled correctly. For playback we need to know the path of the SDK
bundle, which is now exposed by the AppInfo iOS module.
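A sketch of that lookup on iOS (the file name is illustrative):
~~~
// Sketch: locate a sound bundled with the SDK framework on iOS.
NSBundle *sdkBundle = [NSBundle bundleForClass:[JitsiMeetView class]];
NSURL *soundURL = [sdkBundle URLForResource:@"recordingOn"
                              withExtension:@"mp3"];
~~~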
Adds the base/sounds feature, which allows other features to register a sound
source under a specified id. A new SoundsCollection component will then
render a corresponding HTMLAudioElement for each such sound. Once the "setRef"
callback is called by the HTMLAudioElement, this element will be added
to the Redux store. When that happens, the sound can be played through the
new 'playSound' action, which will call the play() method on the stored
HTMLAudioElement instance.
This only works automatically on Android >= 8. On other platforms / versions,
it relies on the SDK user implementing a "reduced UI" mode and reacting to the
"request PIP" delegate method.