Ever since https://github.com/facebook/react-native/pull/23674 landed it has
been possible to run timers in the background, assuming your app is allowed to
run in the background already, as is our case. So, stop using the library on
iOS, which avoids creating needless background tasks.
When you join, as a guest, a conference that needs an authenticated moderator, Jitsi Meet will continuously try to connect to the meeting every 5 seconds. Avoid starting the native call integration more than once.
Fixes: https://github.com/jitsi/jitsi-meet/issues/6260
Up until now we relied on implicit loading of middlewares and reducers, through
having imports in each feature's index.js.
This leads to many complex import cycles which result in (sometimes) hard to fix
bugs in addition to (often) breaking mobile because a web-only feature gets
imported on mobile too, thanks to the implicit loading.
This PR changes that to make the process explicit. Both middlewares and reducers
are imported in a single place, the app entrypoint. They have been divided into
3 categories: any, web and native, which represent each of the platforms
respectively.
Ideally no feature should have an index.js exporting actions, action types and
components, but that's a larger ordeal, so this is just the first step in
getting there. In order to both set an example and avoid large cycles, the app
feature has been refactored to not have an index.js itself.
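Roughly, the explicit loading ends up looking like this (a sketch only; the file
names and feature paths below are illustrative, not necessarily the repository's):

    // app/middlewares.native.js -- shared middlewares first, then native-only ones.
    import './middlewares.any';

    import '../mobile/audio-mode/middleware';
    import '../mobile/full-screen/middleware';

    // app/reducers.native.js -- the same split, for reducers.
    import './reducers.any';

    import '../mobile/full-screen/reducer';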
Move all polyfills to a standalone feature, which gets imported before anything
else in the mobile entrypoint. This guarantees that any further import sees the
polyfilled environment.
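A sketch of the intended import ordering in the mobile entrypoint (paths are
illustrative):

    // index.native.js -- the polyfills feature must be the very first import,
    // so everything imported afterwards already sees the polyfilled globals.
    import './features/mobile/polyfills';

    import { AppRegistry } from 'react-native';

    import { App } from './features/app/components';

    AppRegistry.registerComponent('App', () => App);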
Fall back to the non-ConnectionService case for any error. Also, handle errors
when registering the phone account; Pixel C devices throw UnsupportedException.
Some Samsung devices will fail to fully engage ConnectionService if no SIM card
was ever installed on the device. We could check for it, but it would require
the CALL_PHONE permission, which is not something we want to do, so fall back
to not using ConnectionService.
Some devices seem to have a bug in their Android versions and startCall fails
with SecurityError because the CALL_PHONE permission is not granted. This is
not a requirement for self-managed connection services as per the official
documentation though:
https://developer.android.com/guide/topics/connectivity/telecom/selfManaged
Alas, a connection service takes over audio device management too, so let's
handle the error and disable ConnectionService if we get SecurityError.
- Remove network-activity "feature"
- It wasn't in use
- It relied on internal React Native components, bound to break anytime
- Show an infinite loading indicator
- Style it just like the LoadConfigOverlay
- Since it kind of represents the opposite: an "unload" once the SDK is done
In iOS 13 if the call is not unmuted when we report it to the system as started,
an action to unmute it is dispatched automagically. Thanks, Apple.
So, delay synchronizing the muted state until the conference is started (after
the join action). This creates a small window for de-synchronization, but it's
very short and it seems unavoidable.
This change is only applied to operating systems built by the fruit company in
Cupertino.
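A rough sketch of the JS side of this idea, assuming a placeholder native module
name (CallIntegration) and a callUUID field on the conference; not the literal
patch:

    import { NativeModules } from 'react-native';

    import { CONFERENCE_JOINED } from '../base/conference';
    import { SET_AUDIO_MUTED } from '../base/media';
    import { MiddlewareRegistry } from '../base/redux';

    // Placeholder name for whichever native module reports mute state to CallKit.
    const { CallIntegration } = NativeModules;

    MiddlewareRegistry.register(store => next => action => {
        const result = next(action);
        const state = store.getState();
        const { conference } = state['features/base/conference'];

        switch (action.type) {
        case CONFERENCE_JOINED:
            // Only now report the real muted state to the OS; doing it before
            // the call is started makes iOS 13 auto-dispatch an unmute.
            CallIntegration.setMuted(
                conference.callUUID, state['features/base/media'].audio.muted);
            break;

        case SET_AUDIO_MUTED:
            // After joining, keep CallKit in sync as usual.
            conference && CallIntegration.setMuted(conference.callUUID, action.muted);
            break;
        }

        return result;
    });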
React Native defines neither __filename nor __dirname, so do it artisanally. In
addition, this helps with centralizing the configuration passed to loggers.
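Roughly the kind of polyfill this means (the values are harmless stand-ins):

    // Metro does not provide these CommonJS globals, so define them before any
    // module (for example the logging configuration) tries to read them.
    if (typeof __filename === 'undefined') {
        global.__filename = '';
    }
    if (typeof __dirname === 'undefined') {
        global.__dirname = '';
    }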
This commit refactors device selection (more heavily on iOS) to make it
consistent across platforms.
Due to its complexity I couldn't break out each step into separate commits,
apologies to the reviewer.
Changes made to device handling:
- speaker is always the default, regardless of the mode
- "Phone" shows as a selectable option, even in video call mode
- "Phone" is not displayed when wired headphones are present
- Shared device picker between iOS and Android
- Runtime device updates while the picker is open
This refactors all handling of audio-only and lastN into 2 features in preparation
for "low bandwidth mode".
The main motivation to do this is that lastN is a "global" setting so it helps
to have all processing for it in a single place.
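For illustration, the kind of single-place computation this enables (the state
paths and default values are assumptions, not the project's exact ones):

    // Derive the effective lastN from the "global" settings in one place.
    export function getEffectiveLastN(state) {
        const { audioOnly } = state['features/base/audio-only'] || {};
        const { lastN } = state['features/base/lastn'] || {};

        // Audio-only ("low bandwidth") mode needs no remote video at all.
        if (audioOnly) {
            return 0;
        }

        // Otherwise use the configured value, or -1 for "unlimited".
        return typeof lastN === 'number' ? lastN : -1;
    }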
Moves getCurrentConferenceUrl method to base/connection to allow reuse.
The new location is not ideal, but looks the best based on the imports
required (trying to avoid circular dependencies).
Dear reader, I'm not proud at all of what you are about to read, but sometimes
life just gives you lemons, so enjoy some lemonade!
Joining a conference implies first creating the XMPP connection and then joining
the MUC. It's very possible the XMPP connection was made but there was no chance
for the conference to be created.
This patch fixes this case by artificially generating a conference terminated
event in such case. In order to have all the necessary knowledge for this event
to be sent the connection now keeps track of the conference that runs it.
In addition, there is an even more obscure corner case: it's not impossible to
try to disconnect when there is not even a connection. This was fixed by
creating a fake disconnect event. Alas the location URL is lost at this point,
but it's better than nothing I guess.
Using anything non-serializable for action types is discouraged:
https://redux.js.org/faq/actions#actions
In fact, this is the Flow definition for dispatching actions:
declare export type DispatchAPI<A> = (action: A) => A;
declare export type Dispatch<A: { type: $Subtype<string> }> = DispatchAPI<A>;
Note how the `type` field is defined as a subtype of string, which Symbol isn’t.
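The shape of the change, using one action type as an example:

    // Before (non-serializable, fails the Flow Dispatch definition above):
    // export const CONFERENCE_JOINED = Symbol('CONFERENCE_JOINED');

    // After (a plain string, serializable and a valid $Subtype<string>):
    export const CONFERENCE_JOINED = 'CONFERENCE_JOINED';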
Consolidate all failure cases into a single one: CONFERENCE_TERMINATED. If the
conference ended gracefully no error indicator will be present, otherwise there
will be.
The conference disconnection process is asynchronous which means there's
no guarantee that there will be CONFERENCE_LEFT event for the old
conference, before the next conference is joined. Because of that we can
end up with two simultaneous calls on the native side which is not
always supported. End the call on CONFERENCE_WILL_LEAVE to fix this
corner case.
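A simplified sketch of the idea, with a placeholder native module name and an
assumed callUUID field on the conference object:

    import { NativeModules } from 'react-native';

    import { CONFERENCE_WILL_LEAVE } from '../base/conference';
    import { MiddlewareRegistry } from '../base/redux';

    // Placeholder for the native module that ends the system-side call.
    const { CallIntegration } = NativeModules;

    MiddlewareRegistry.register(() => next => action => {
        if (action.type === CONFERENCE_WILL_LEAVE) {
            // End the native call as soon as we know we are leaving, instead of
            // waiting for the asynchronous CONFERENCE_LEFT, so a quickly joined
            // next conference can never overlap with the old one natively.
            const { callUUID } = action.conference;

            callUUID && CallIntegration.endCall(callUUID);
        }

        return next(action);
    });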
* feat(Android): implement ConnectionService
Adds basic integration with Android's ConnectionService by implementing
the outgoing call scenario.
* ref(callkit): rename _SET_CALLKIT_SUBSCRIPTIONS
* ref(callkit): move feature to call-integration directory
* feat(ConnectionService): synchronize video state
* ref(AudioMode): use ConnectionService on API >= 26
Not ready yet - a few details are left, mentioned in the FIXMEs
* feat(ConnectionService): add debug logs
Adds logs to trace the calls.
* fix(ConnectionService): leaking ConnectionImpl instances
Turns out there is no callback fired back from the JavaScript side after
the disconnect or abort event is sent from the native. The connection
must be marked as disconnected and removed immediately.
* feat(ConnectionService): handle onCreateOutgoingConnectionFailed
* ref(ConnectionService): merge classes and move to the sdk package
* feat(CallIntegration): show Alert if outgoing call fails
* fix(ConnectionService): alternatively get call UUID from the account
Some Android flavours (or versions?) do not copy over extras to
the onCreateOutgoingConnectionFailed callback. But the call UUID is also
set as the PhoneAccount's label, so eventually it should be available
there.
* ref(ConnectionService): use call UUID as PhoneAccount ID.
The extra is not reliable on some custom Android flavours. It also makes
sense to use a unique id for the account instead of the URL, given that
it's created on a per-call basis.
* fix(ConnectionService): abort the call when hold is requested
Turns out Android P can sometimes request HOLD even though there's no
HOLD capability added to the connection (what!?), so just abort the call
in that case.
* fix(ConnectionService): unregister account on call failure
Unregister the PhoneAccount onCreateOutgoingConnectionFailed. That's
before the ConnectionImpl instance is created which is normally
responsible for doing that.
* fix(AudioModeModule): make package private and run on the audio thread
* address other review comments
componentWillMount is a deprecated lifecycle method;
componentDidMount should be used to kick off things
like ajax. In the case of the _App hierarchy, a promise
chain is used to perform initialization, and it is
first started in the constructor by initializing
storage. However, by the time storage is initialized,
resolving the first promise, _App has already mounted.
So, move it all to the componentDidMount lifecycle.
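The gist of the change, sketched with placeholder init methods:

    import { Component } from 'react';

    class BaseApp extends Component {
        componentDidMount() {
            // Kick off the whole async init chain only after mounting;
            // previously it started in the constructor, and the first promise
            // (storage init) could resolve before the component had mounted.
            this._init = this._initStorage().then(() => this._createStore());
        }

        _initStorage() {
            return Promise.resolve(); // placeholder for the real storage init
        }

        _createStore() {
            return Promise.resolve({}); // placeholder for the real store creation
        }

        render() {
            return null;
        }
    }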
For the most part the changes are taking the "static propTypes" declaration off
of components and declaring them as Flow types. Sometimes, to support Flow, some
method signatures had to be added. There are some exceptions in which more had
to be done to tame the beast:
- AbstractVideoTrack: put in additional truthy checks for videoTrack.
- Video: add truthy checks for the _videoElement ref.
- shouldRenderVideoTrack function: Some components could pass null for the
videoTrack argument and Flow wanted that called out explicitly.
- DisplayName: Add a truthy check for the input ref before acting on it.
- NumbersList: Move array checks inline for Flow to comprehend array methods
could be called. Add type checks in the Object.entries loop as the value is
assumed to be a mixed type by Flow.
- AbstractToolbarButton: add additional truthy check for passed in type.
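The typical shape of the conversion (a generic example, not a specific file from
this commit):

    // @flow
    import { Component } from 'react';

    // Before: static propTypes = { _videoTrack: PropTypes.object, onPress: PropTypes.func };
    // After, as a Flow type:
    type Props = {

        // The video track, possibly missing -- hence the extra truthy checks.
        _videoTrack: ?Object,

        // Callback invoked when the button is pressed.
        onPress: Function
    };

    class ExampleButton extends Component<Props> {
        render() {
            return null;
        }
    }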
Use react-native-fast-image, which uses 2 fully native image implementations
built on well known and mature (native) libraries.
This gets rid of 2 libraries which were observed as a source of bugs and
created trouble with dependencies: react-native-fetch-blob and
react-native-img-cache. They are also no longer well maintained.
It's a separate view (on the native side) and app (on the JavaScript side) so
applications can use it independently.
Co-authored-by: Shuai Li <sli@atlassian.com>
Co-authored-by: Pawel Domas <pawel.domas@jitsi.org>
BaseApp does all the heavy-lifting related to creating the redux store,
navigation, and so on.
App currently handles URL props and actually triggering navigation based on
them.
Fix the "mute ping pong" for once and for all. This patch takes a new approach
to the problem: it keeps track of the user generated CallKit transaction ations
and avoids calling the delegate method in those cases.
This results in a much cleaner and easier to understand handling of the flow: if
the delegate method is called it means the user tapped on the mute button. When
we sync the muted state in JS with CallKit the delegate method won't be called
at all, thus avoiding the ping-pong altogether.
In addition, make sure all CallKit methods run in the UI thread. CallKit will
call our delegate methods in the UI thread too, thus there is no need to
synchronize access to the listener / pending action sets.
The change to mobile/external-api is required to not emit
CONFERENCE_FAILED for CONNECTION_FAILED if the conference has been
started, because base/conference state will still hold conference
instances which are to be ended by other means and result in the
appropriate event (which will adjust the base/conference state).
Yours truly introduced this regression in
8c7a3f16b1, alas.
The audio only mode is used to set the CallKit call type. This affects the
behavior on the recent calls entries (calls are marked either as audio or video
calls).
Sync both at the start and for transitions. The previous code was working by
chance (in a way): when the CallKit UI is presented the local video is muted,
which triggers a SET_VIDEO_MUTED action, at which point the audio-only mode was
checked for. Now we are more explicit and act on SET_AUDIO_MUTED.
Read the muted state from the track itself instead of from base/media. This
avoids expressing the incorrect desire when the call starts muted because
permission was never granted.
Makes sure that whenever a conference is left or switched, the local
participant's id will be equal to the default value.
The problem fixed by this commit is a situation where the local
participant may end up sharing the same ID with its "ghost" when
rejoining a disconnected conference. The most important and easiest to
hit case is when the conference is left after the CONFERENCE_FAILED
event.
Another rare issue, harder to encounter in the real world, is
where CONFERENCE_LEFT may come with a delay due to its asynchronous
nature. The step by step scenario is as follows: trying to leave a
conference, but the network is not doing well, so it takes time,
requests are timing out. After getting back to the welcome page
the CONFERENCE_LEFT has not arrived yet. The same conference is joined
again and the config load may time out, but it will be read from the
cache. Now the network gets better and the conference is joining, which
results in our ghost participant added to the redux state. At this point
there's the root issue: two participants with the same id, because the
local one was neither cleared nor set to the new one yet
(PARTICIPANT_JOINED comes before CONFERENCE_JOINED, where we adjust the
id). Then comes CONFERENCE_JOINED and we try to update our local id.
We end up updating the ID of both the ghost and the local participant. It
could also be that the delayed CONFERENCE_LEFT comes for the old conference,
but it's too late and it would update the id for both participants.
The approach here reasons that the ID of the local participant
may be reset as soon as the local participant and, respectively, her ID
is no longer involved in a recoverable JitsiConference of interest to
the user and, consequently, the app.
Co-authored-by: Pawel Domas <pawel.domas@jitsi.org>
Co-authored-by: Lyubo Marinov <lmarinov@atlassian.com>
Add the ability to provide a display name in the configOverwrite object;
when available, it will be used to customize the name of the meeting in
the CallKit screen and the recent calls list.
Co-authored-by: Daniel Ornelas <daniel.ob64@gmail.com>
Co-authored-by: Lyubo Marinov <lmarinov@atlassian.com>
With some of the preceding commits in the "multiplying remote
thumbnails" story line, I started hitting this error with 100%
reproducibility:
1. Have a remote participant prepared in conferenceA. Web will do as
well.
2. On iOS prepare to join conferenceB in Safari and use the same device
for step 3.
3. Join conferenceA on the iOS device from step 2 with audio-only. The
audio-only is so that avatars are always visible. Wait for the remote
participant prepared in step 1 to appear.
4. Switch to Safari and hit "Continue in the app" to have the app leave
conferenceA and join conferenceB.
What happens:
After the iOS device joins conferenceB in the Jitsi Meet app, the local
participant is on the large video (as expected) but the avatar of the
local participant is the default audo-generated auto-colored
placeholder. That's because this error was hit and the avatar couldn't
be "fetched".
Contributing all buttons in one place goes against the designs that we
set out at the beginning of the project's rewrite and that several of
us have been following since then.
If multiple JitsiMeetView instances are created (not necessarily
existing at once), it's possible to hit a TypeError when reading the
React Component props of the currently mounted App. Anyway, in certain
places we're already protecting against that out of an abundance of caution,
so it makes no sense to not protect everywhere.
Make it more generic by accepting any content instead of just rows with text and
icons.
In addition, rework its structure so the animation is smoother, by putting the
background overlay outside of the Modal. This way, the animation doesn't affect
the background, which won't slide down.
It seems that the external API will not send any event to let the sdk
consumer know that the conference has failed if the problem occurs at
the XMPP connection establishment stage. That's because the config was
loaded successfully, but the conference instance does not exist yet, so
neither base/config nor base/conference will emit any failure.
Make the external API emit CONFERENCE_WILL_JOIN early on SET_ROOM action
which occurs before the XMPP connection is created. At this point we
know that config has loaded and if there's a valid conference room to
be joined. We were thinking of doing that even on CONFIG_WILL_LOAD,
but that seemed to be too risky at this point.
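A sketch of the idea; the event-sending helper is a placeholder, not the
feature's actual API:

    import { SET_ROOM } from '../../base/conference';
    import { MiddlewareRegistry } from '../../base/redux';

    // Placeholder: forwards an event to the native ExternalAPI module.
    function _sendEvent(store, name, data) {
        // e.g. hand off to the native side here.
    }

    MiddlewareRegistry.register(store => next => action => {
        const result = next(action);

        if (action.type === SET_ROOM && action.room) {
            // The config has loaded and there is a valid room to join, but no
            // XMPP connection exists yet; notify the SDK consumer now so that a
            // later connection failure is still preceded by a "will join".
            _sendEvent(store, 'CONFERENCE_WILL_JOIN', { room: action.room });
        }

        return result;
    });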
With this the RN component and the consumer app can share the same CallKit
provider and configuration, and both can listen to the CallKit flow events.
The main driver of this is to enable the consumer app to report an incoming
call to the OS before loading the JitsiMeetView. Once the user answers the
call, the app can instantiate a JitsiMeetView, pass the CallKit call UUID,
and the Jitsi Meet components will handle the connection and report back to
CallKit that the call has been established.
Currently enterPictureInPicture action can only be dispatched when
the app is on the conference view and the enter PiP button is displayed,
so no check should be necessary.
The Activity.enterPictureInPictureMode method must be invoked synchronously
in the userLeaveHint callback, in order to be sure that the current Activity
is still visible (has not transitioned to the PAUSED state). Previously, if the
asynchronous processing was delayed enough for the Activity to go
into the PAUSED state, it was too late to go into PiP mode.
* Button conditionally shown based on if the feature is enabled and available
* Hooks for launching the invite UI (delegates to the native layer)
* Hooks for using the search and dial out checks from the native layer (calls back into JS)
* Hooks for handling sending invites and passing any failures back to the native layer
* Android and iOS handling for those hooks
Author: Ryan Peck <rpeck@atlassian.com>
Author: Eric Brynsvold <ebrynsvold@atlassian.com>
Activity.enterPictureInPictureMode can fail for a couple of reasons
mentioned in the JSDoc:
"The system may disallow entering picture-in-picture in various cases,
including when the activity is not visible, if the screen is locked or
if the user has an activity pinned."
It seems to be safe to assume that those cases will be caught by
a RuntimeException handler (only RuntimeExceptions can be left without
explicit catch block).
Anyway, the root cause of the problems is the fact that the current process
for going into picture-in-picture mode is not synchronised with the
Activity's lifecycle. On the Activity's "userLeaveHint" callback we dispatch an
async task to the JS code, which only then, after dispatching some more
stuff, eventually calls the native method that enters PiP. In case we spend too
much time on the JS side and the Activity goes to the PAUSED state, the call
will fail with IllegalStateException: "activity is not visible",
"activity is paused", etc. This means that with this fix the app will not
crash, but we'll sometimes see it not go into PiP mode as expected.
This only works automatically on Android >= 8. On other platforms / versions, it
relies on the SDK user implementing a "reduced UI" mode and reacting to the
"request PIP" delegate method.
On Android we go into "immersive mode" when in a conference; this is our way of
being full-screen. There are occasions, however, in which Android takes us out of
immersive mode without us (the application / SDK) knowing: when a child activity
is started, a modal window is shown, etc.
In order to be resilient to any possible change in the immersive mode, register
a listener which will be called when Android changes it, so we can re-evaluate
if we need it and thus re-enable it.
Turns out this was a bit more involved than I originally thought due to an
interesting (corner) case: IFF the user was never asked about microphone
permissions and the call starts with audio muted, unmuting from the CallKit
interface won't work (iOS won't show the prompt, it fails immediately) and we
need to sync the mute state back.
* ref: Restructures the pinned/unpinned events.
* ref: Refactors the "audio only disabled" event.
* ref: Refactors the "stream switch delay" event.
* ref: Refactors the "select participant failed" event.
* ref: Refactors the "initially muted" events.
* ref: Refactors the screen sharing started/stopped events.
* ref: Restructures the "device list changed" events.
* ref: Restructures the "shared video" events.
* ref: Restructures the "start muted" events.
* ref: Restructures the "start audio only" event.
* ref: Restructures the "sync track state" event.
* ref: Restructures the "callkit" events.
* ref: Restructures the "replace track".
* ref: Restructures keyboard shortcuts events.
* ref: Restructures most of the toolbar events.
* ref: Refactors the API events.
* ref: Restructures the video quality, profile button and invite dialog events.
* ref: Refactors the "device changed" events.
* ref: Refactors the page reload event.
* ref: Removes an unused function.
* ref: Removes a method which is needlessly exposed under a different name.
* ref: Refactors the events from the remote video menu.
* ref: Refactors the events from the profile pane.
* ref: Restructures the recording-related events.
Removes events fired when recording with something other than jibri
(which isn't currently supported anyway).
* ref: Cleans up AnalyticsEvents.js.
* ref: Removes an unused function and adds documentation.
* feat: Adds events for all API calls.
* fix: Addresses feedback.
* fix: Brings back mistakenly removed code.
* fix: Simplifies code and fixes a bug in toggleFilmstrip
when the 'visible' parameter is defined.
* feat: Removes the resolution change application log.
* ref: Uses consistent naming for events' attributes.
Uses "_" as a separator instead of camel case or ".".
* ref: Don't add the user agent and conference name
as permanent properties. The library does this on its own now.
* ref: Adapts the GA handler to changes in lib-jitsi-meet.
* ref: Removes unused fields from the analytics handler initialization.
* ref: Renames the google analytics file and adds docs.
* fix: Fixes the push-to-talk events and logs.
* npm: Updates lib-jitsi-meet to 515374c8d383cb17df8ed76427e6f0fb5ea6ff1e.
* fix: Fixes a recently introduced bug in the google analytics handler.
* ref: Uses "value" instead of "delay" since this is friendlier to GA.
Rearrange the ToolbarButtons in the secondary Toolbar in order to mostly
group the media-related ones such as the AudioRouteButton, the
switchCamera button, and the audio-only mode button.
Due to the difference in nature, the iOS and Android implementations are
completely different:
iOS: MPVolumeView is used, which allows us to place a button which will launch a
native route picker provided by iOS itself. This view is different depending on
the iOS version, with the iOS 11 version being more complete.
Android: A completely custom component is used, which displays a bottom sheet
with the device categories, not devices individually. This is akin to the sheet
in the built-in dialer.
This commit adds extra actions/Redux state to be able to deal with
the GUM operation being in progress. There will be an early local track
stub in the redux state for any local track for which GUM has been
called, but not yet completed.
A local track is considered valid only after the TRACK_ADDED event, when it
will have the JitsiLocalTrack instance set.
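A rough illustration of the track-stub idea; the action shapes and names below
are simplified, not the exact ones used by base/tracks:

    const TRACK_WILL_CREATE = 'TRACK_WILL_CREATE'; // hypothetical action type
    const TRACK_ADDED = 'TRACK_ADDED';

    function tracks(state = [], action) {
        switch (action.type) {
        case TRACK_WILL_CREATE:
            // GUM has been called: store a stub with no JitsiLocalTrack yet.
            return [
                ...state,
                { local: true, mediaType: action.mediaType, jitsiTrack: null }
            ];

        case TRACK_ADDED:
            // GUM completed: replace the matching stub with the real track.
            return state.map(
                t => t.local && t.mediaType === action.track.mediaType
                    ? action.track
                    : t);

        default:
            return state;
        }
    }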
ESLint 4.8.0 discovers a lot of errors related to formatting. While I
tried to fix as many of them as possible, a portion of them actually go
against our coding style. In such a case, I've disabled the indent rule
which effectively leaves it as it was before ESLint 4.8.0.
Additionally, remove jshint because it's becoming a nuisance with its
lack of understanding of ES2015+.
* Javadoc introduced @code as a replacement of <code> and <tt> which is
better aligned with other javadoc tags such as @link. Use it in the
Java source code. If we switch to Kotlin, then we'll definitely use
Markdown.
* There are more uses of @code in the JavaScript source code than <tt>
so use @code for the sake of consistency. Eventually, I'd rather we
switch to Markdown because it's easier on my eyes.
* Xcode is plain confused by @code and @link. The Internet says that
Xcode supports the backquote character to denote the beginning and end
of a string of characters which should be formatted for display as
code but it doesn't work for me. <tt> is not rendered at all. So use
the backquote which is rendered itself. Hopefully, if we switch to
Markdown, then it'll be common between JavaScript and Objective-C
source code.
This commit adds initial support for CallKit on supported platforms: iOS >= 10.
Since the call flow in Jitsi Meet is basically making outgoing calls, only
outgoing call support is currently handled via CallKit.
Features:
- "Green bar" when in a call.
- Native CallKit view when tapping on the call label on the lock screen.
- Support for audio muting from the native CallKit view.
- Support for recent calls (audio-only calls logged as Audio calls, others show
as Video calls).
- Call display name is room name.
- Graceful downgrade on systems without CallKit support.
Limitations:
- Native CallKit view cannot be shown for audio-only calls (this is a CallKit
limitation).
- The video button in the CallKit view will start a new video call to the same
room, and terminate the previous one.
- No support for call hold.
The error is stored in the redux store in base/config so other components can
consult it. It is also broadcast as a new event in the external API for the
SDK.
I'm not saying that the two commits in question were wrong or worse than
what I'm offering. Anyway, I think what I'm offering brings:
* Compliance with expectations i.e. the middleware doesn't compute the
next state from the current state, the reducer does;
* Clarity and/or simplicity i.e. there's no global variable (reqIndex),
there's no need for the term "index" (a.k.a "reqIndex") in the redux
store.
* Renaming net-interceptor to network-activity feels like it's
preparing the feature to implement a NetworkActivityIndicator React
Component which will take on more of the knowledge about the specifics
of the network activity redux state: what it is exactly, whether it is
maintained by interception or some other mechanism, and so on, abstracting
it in the feature itself and allowing outsiders to merely render a React
Component.
Works only for XHR requests, which is the only kind of network request mobile performs
(WebRTC traffic aside). The fetch API is implemented on top of XHR, so that's
covered too.
Requests are kept in the redux store until they complete, at which point they
are removed.
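The general technique, sketched (not the feature's literal code); the action
types and the dispatch hookup are illustrative:

    const dispatch = action => { /* forward to the app's redux store */ };

    const { open, send } = XMLHttpRequest.prototype;

    XMLHttpRequest.prototype.open = function(...args) {
        this._url = args[1];

        return open.apply(this, args);
    };

    XMLHttpRequest.prototype.send = function(...args) {
        dispatch({ type: 'NETWORK_REQUEST_STARTED', url: this._url });

        // 'loadend' fires on success, error and abort alike.
        this.addEventListener('loadend', () =>
            dispatch({ type: 'NETWORK_REQUEST_COMPLETED', url: this._url }));

        return send.apply(this, args);
    };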
Simplify the code by using a bitfield instead of a couple of boolean flags. This
allows us to mute the video from multiple places and only make the unmute
effective once they have all unmuted.
Alas, this cannot be applied to the web without a massive refactor, because it
uses the track muted state as the source of truth instead of the media state.
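A minimal sketch of the bitfield approach; the flag names are illustrative:

    // Each "authority" that wants the video muted sets its own bit; unmute
    // only becomes effective once every bit has been cleared again.
    const AUDIO_ONLY = 1 << 0;
    const BACKGROUND = 1 << 1;
    const USER = 1 << 2;

    function setVideoMutedBy(currentFlags, flag, muted) {
        const flags = muted ? currentFlags | flag : currentFlags & ~flag;

        return {
            flags,

            // The video stays muted as long as any authority says so.
            muted: flags !== 0
        };
    }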
Avatars are cached to the filesystem and loaded from there when requested again.
The cache is cleaned after a conference ends and on application startup
(defensive move).
In addition, implement a fully local avatar system, which is used as a fallback
when loading a remote avatar fails. It can also be forced using a prop.
The fully local avatars use a user icon as a mask and apply a background color
which is picked by hashing the URI passed to the avatar. If no URI is passed a
random color is chosen.
A grace period of 1 second is also implemented so a default local avatar will be
rendered if an Avatar component is mounted but has no URI. If a URI is specified
later on, it will be loaded and displayed. In case loading the remote avatar
fails, the locally generated one will be used.
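Roughly how the deterministic background color can be derived; the palette and
the hash function are illustrative, not the shipped ones:

    const COLORS = [ '#2a9d8f', '#e9c46a', '#e76f51', '#264653', '#8ab17d' ];

    function colorForURI(uri) {
        if (!uri) {
            // No URI at all: pick a random color.
            return COLORS[Math.floor(Math.random() * COLORS.length)];
        }

        // Simple string hash so the same URI always maps to the same color.
        let hash = 0;

        for (let i = 0; i < uri.length; i++) {
            hash = (hash * 31 + uri.charCodeAt(i)) | 0;
        }

        return COLORS[Math.abs(hash) % COLORS.length];
    }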