This fix is based on storing the location URL object we are loading the
configuration for in the redux store. Once the config has been loaded (or it has
failed, for that matter!), we'll check whether the current "config URL" is the same
one we set, and discard the old one if they don't match.
Knowledge is power, man!
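Roughly, the idea looks like this (the action types and state shape below are
illustrative, not the feature's exact API):

```js
// Sketch only: hypothetical action types and state shape.
const SET_LOCATION_URL = 'SET_LOCATION_URL';
const CONFIG_LOADED = 'CONFIG_LOADED';
const CONFIG_LOAD_FAILED = 'CONFIG_LOAD_FAILED';

function configReducer(state = {}, action) {
    switch (action.type) {
    case SET_LOCATION_URL:
        // Remember which location URL the config is being loaded for.
        return { ...state, locationURL: action.locationURL };

    case CONFIG_LOADED:
    case CONFIG_LOAD_FAILED:
        // Discard results which belong to a previously-set location URL.
        if (action.locationURL !== state.locationURL) {
            return state;
        }

        return { ...state, config: action.config, error: action.error };
    }

    return state;
}
```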
The config.js cache predates the feature base/known-domains.
Technically, it's also able to recall more domains than the feature
recent-list can (because the latter limits its entries).
If multiple JitsiMeetView instances are created (not necessarily
existing at once), it's possible to hit a TypeError when reading the
React Component props of the currently mounted App. Anyway, in certain
places we're already protecting against that out of an abundance of caution,
so it makes no sense not to protect everywhere.
* Button conditionally shown based on whether the feature is enabled and available
* Hooks for launching the invite UI (delegates to the native layer)
* Hooks for using the search and dial out checks from the native layer (calls back into JS)
* Hooks for handling sending invites and passing any failures back to the native layer
* Android and iOS handling for those hooks
Author: Ryan Peck <rpeck@atlassian.com>
Author: Eric Brynsvold <ebrynsvold@atlassian.com>
* feat(Deeplinking): Implement for web.
* ref(unsupported_browser): Move the mobile version to deeplinking feature
* feat(deeplinking_mobile): Redesign.
* fix(deeplinking): Use interface.NATIVE_APP_NAME.
* feat(dial_in_summary): Add the PIN to the number link.
* fix(deep_linking): Handle use case when there isn't deep linking image.
* fix(deep_linking): css
* fix(deep_linking): deeplink -> "deep linking"
* fix(deeplinking_css): Remove position: fixed
* docs(deeplinking): Add comment for the openWebApp action.
On Android the files will be copied to the assets/sounds directory of
the SDK bundle at build time. To play them, the "asset:/" prefix has to be
used to locate the files correctly.
On iOS each sound file must be added to the SDK's Xcode project in order
to be bundled correctly. For playback we need to know the path of the SDK
bundle, which is now exposed by the AppInfo iOS module.
Adds the base/sounds feature which allows other features to register a sound
source under a specified id. A new SoundsCollection component will then
render a corresponding HTMLAudioElement for each such sound. Once the "setRef"
callback is called by the HTMLAudioElement, that element will be added
to the Redux store. When that happens the sound can be played through the
new 'playSound' action which will call the play() method on the stored
HTMLAudioElement instance.
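A minimal sketch of that flow (the action names and state shape here are
illustrative; the actual feature may differ):

```js
// Sketch only: simplified action creators for the base/sounds flow.
const registerSound = (soundId, src) => ({ type: 'REGISTER_SOUND', soundId, src });

// Dispatched from the HTMLAudioElement "setRef" callback rendered by
// SoundsCollection, so the element ends up in the redux store.
const _addAudioElement = (soundId, audioElement) =>
    ({ type: '_ADD_AUDIO_ELEMENT', soundId, audioElement });

// playSound looks up the stored HTMLAudioElement and calls play() on it.
const playSound = soundId => (dispatch, getState) => {
    const sound = getState()['features/base/sounds'][soundId];

    if (sound && sound.audioElement) {
        sound.audioElement.play();
    }
};
```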
This only works automatically on Android >= 8. On other platforms / versions, it
relies on the SDK user implementing a "reduced UI" mode and reacting to the
"request PIP" delegate method.
We started on the way to responsive UI and its design with aspect ratio
and keeping the filmstrip on the short side of the app's visible
rectangle.
Shortly, we're going to introduce reduced UI for Picture-in-Picture. And
that's where we'll need another dimensions-based detector akin to the
aspect ratio detector.
While the AspectRatioDetector, the up-and-coming ReducedUIDetector, and
their base DimensionsDetector are definitely separate abstractions and
implementations, kept unmixed for the sake of easy extensibility and
maintenance, the three of them are the building blocks on top of which
we'll build our responsive UI.
They will be stored in redux and the PageReloadOverlay will be displayed.
Note that this commit also introduces a subtle (and yet important!) change:
the location URL is now always set, regardless of whether the configuration loads
or not. This is needed in order for the retry logic to pick it up.
On web, Conference is pretty much all there is, but on mobile we have the welcome
page and the blank page. If we fail to load config.js, for example, we will still
be on the welcome page *and* we want to show an error overlay.
Adds the ability to detect the app area's aspect ratio on React Native
through the features/base/aspect-ratio feature.
Makes conference, filmstrip and toolbox react to aspect ratio
changes and display the filmstrip on the shorter side of the screen.
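Roughly, the detection can be pictured like this (the constants and action
names are illustrative; the Dimensions API is React Native's):

```js
// Sketch only: detect the aspect ratio from the window dimensions and keep
// redux up to date so conference, filmstrip and toolbox can react to it.
import { Dimensions } from 'react-native';

const ASPECT_RATIO_NARROW = Symbol('narrow'); // portrait-like
const ASPECT_RATIO_WIDE = Symbol('wide'); // landscape-like

const setAspectRatio = aspectRatio => ({ type: 'SET_ASPECT_RATIO', aspectRatio });

const calculateAspectRatio = ({ width, height }) =>
    width < height ? ASPECT_RATIO_NARROW : ASPECT_RATIO_WIDE;

function listenForAspectRatioChanges(dispatch) {
    dispatch(setAspectRatio(calculateAspectRatio(Dimensions.get('window'))));

    Dimensions.addEventListener('change', ({ window }) =>
        dispatch(setAspectRatio(calculateAspectRatio(window))));
}
```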
ESLint 4.8.0 discovers a lot of errors related to formatting. While I
tried to fix as many of them as possible, a portion of them actually go
against our coding style. In such cases, I've disabled the indent rule,
which effectively leaves it as it was before ESLint 4.8.0.
Additionally, remove jshint because it's becoming a nuisance with its
lack of understanding of ES2015+.
* Javadoc introduced @code as a replacement of <code> and <tt> which is
better aligned with other javadoc tags such as @link. Use it in the
Java source code. If we switch to Kotlin, then we'll definitely use
Markdown.
* There are more uses of @code in the JavaScript source code than <tt>
so use @code for the sake of consistency. Eventually, I'd rather we
switch to Markdown because it's easier on my eyes.
* Xcode is plain confused by @code and @link. The Internet says that
Xcode supports the backquote character to denote the beginning and end
of a string of characters which should be formatted for display as
code but it doesn't work for me. <tt> is not rendered at all. So use
the backquote, which is at least rendered itself. Hopefully, if we switch to
Markdown, then it'll be common between JavaScript and Objective-C
source code.
This commit adds initial support for CallKit on supported platforms: iOS >= 10.
Since the call flow in Jitsi Meet is basically making outgoing calls, only
outgoing call support is currently handled via CallKit.
Features:
- "Green bar" when in a call.
- Native CallKit view when tapping on the call label on the lock screen.
- Support for audio muting from the native CallKit view.
- Support for recent calls (audio-only calls logged as Audio calls, others show
as Video calls).
- Call display name is room name.
- Graceful downgrade on systems without CallKit support.
Limitations:
- Native CallKit view cannot be shown for audio-only calls (this is a CallKit
limitation).
- The video button in the CallKit view will start a new video call to the same
room, and terminate the previous one.
- No support for call hold.
The error is stored in the redux store in base/config so other components can
consult it. It is also broadcast as a new event in the external API for the
SDK.
This only helps if there is a short, transient network error which prevents the
configuration from being loaded. In such a case, use the cached version in
localStorage, which may not match the shard, but it's (probably!) better than
nothing.
In case there is no Internet connectivity, an error will be produced as soon as
the XMPP connection is attempted anyway.
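The fallback boils down to something like the following (the helper names and
the cache key scheme are made up for illustration):

```js
// Sketch only: cache the last successfully loaded config per base URL.
function storeConfig(baseURL, config) {
    try {
        window.localStorage.setItem(`config.js/${baseURL}`, JSON.stringify(config));
    } catch (e) {
        // localStorage may be unavailable or full; caching is best-effort only.
    }
}

function restoreConfig(baseURL) {
    const cached = window.localStorage.getItem(`config.js/${baseURL}`);

    try {
        return cached ? JSON.parse(cached) : undefined;
    } catch (e) {
        // Corrupted entry; behave as if there were no cache.
        return undefined;
    }
}
```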
This patch loads the config later than we used to, that is, only once we
know the room the user is about to join.
Due to architectural limitations in lib-jitsi-meet, it needs to be
initialized with a configuration in order to properly function. This is
unfortunate because we need to create a video track in the welcome page,
but don't know the room (hence no config) yet. In order to circumvent
this problem an empty configuration is used, which is later swapped with
the appropriate one, once loaded.
Some interesting side-effects of this change are a perceived speed
increase when the app starts or a conference is hung up. Both are
due to the fact that no config needs to be fetched from a remote server
in those cases.
In order to load the configuration from the shard that will actually
host the conference, it's imperative that we add the room= query
parameter:
https://meet.jit.si/config.js?room=example
This implies a departure from our current model, where the config is
discarded if the domain for the next conference is different, but kept
otherwise.
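For illustration, deriving the config URL could look roughly like this (the
helper is hypothetical):

```js
// Sketch only: build the config.js URL for a given location URL, appending
// room= so the request lands on the shard that will host the conference.
function getConfigURL(locationURL) {
    const { protocol, host, pathname } = locationURL;
    const room = pathname.startsWith('/') ? pathname.slice(1) : pathname;
    const search = room ? `?room=${encodeURIComponent(room)}` : '';

    return `${protocol}//${host}/config.js${search}`;
}

// getConfigURL(new URL('https://meet.jit.si/example'))
// -> 'https://meet.jit.si/config.js?room=example'
```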
* feat(analytics): move to React
The analytics handlers have been moved to JitsiMeetGlobalNS, so now they are
stored in `window.JitsiMeetJS.app.analyticsHandlers`.
The analytics handlers are re-downloaded and re-initialized on every
lib-jitsi-meet initialization, which happens every time the config is changed
(moving between deployments in the mobile app).
* Adds legacy support for old analytics location.
We have already made the implicit decision not to pursue what the
comment describes. If we ever revisit it, it probably won't be handled
where the comment is anyway.
I'm not saying that the two commits in question were wrong or worse than
what I'm offering. Anyway, I think what I'm offering brings:
* Compliance with expectations i.e. the middleware doesn't compute the
next state from the current state, the reducer does;
* Clarity and/or simplicity i.e. there's no global variable (reqIndex),
there's no need for the term "index" (a.k.a "reqIndex") in the redux
store.
* Renaming net-interceptor to network-activity feels like it's
preparing the feature to implement a NetworkActivityIndicator React
Component which will take on more of the knowledge about the specifics
of what exactly the network activity redux state is and whether it is
maintained by interception or some other mechanism, and which abstracts
it in the feature itself, allowing outsiders to merely render a React
Component.
Works only for XHR requests, which are the only network requests mobile performs
(WebRTC traffic aside). The fetch API is implemented on top of XHR, so that's
covered too.
Requests are kept in the redux store until they complete, at which point they
are removed.
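The interception itself is a small monkey-patch along these lines (the action
names are illustrative):

```js
// Sketch only: track in-flight XHRs by patching XMLHttpRequest.prototype.send.
function startNetworkInterception(dispatch) {
    const originalSend = XMLHttpRequest.prototype.send;

    XMLHttpRequest.prototype.send = function(...args) {
        dispatch({ type: 'ADD_NETWORK_REQUEST', request: this });

        // 'loadend' fires on success, error and abort alike, so the request is
        // always removed from the redux store once it completes.
        this.addEventListener('loadend', () =>
            dispatch({ type: 'REMOVE_NETWORK_REQUEST', request: this }));

        return originalSend.apply(this, args);
    };
}
```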
I see it as the first step in simplifying the route navigation of the
JavaScript app by removing BlankWelcomePage from _getRouteToRender. From
a faraway point of view, the app has a route at which it is not in a
conference. Historically, that route was known as the Welcome page. But
mobile complicated the route by saying that it may not actually want to
show the room name input and join button.
Additionally, I renamed BlankWelcomePage to BlankPage because I don't
think of it as a WelcomePage alternative but rather as a more generic
BlankPage which may be utilized elsewhere in the future.
I plan for the next steps to:
* Merge Entryway, _interceptComponent, and _getRouteToRender in one
React Component rendered by AbstractApp so that the whole logic is in
one file;
* Get rid of RouteRegistry and routes.
When do we need tracks?
- Welcome page (only the video track)
- Conference (depends if starting with audio / video muted is requested)
When do we need to destroy the tracks?
- When we are not in a conference and there is no welcome page
In order to accommodate all the above use cases, a new component is introduced:
BlankWelcomePage. Its purpose is to take the place of the welcome page when it
is disabled. When this component is mounted local tracks are destroyed.
Analogously, a video track is created when the (real) welcome page is created,
and all the desired tracks are created when the Conference component is created.
What are desired tracks? These are the tracks we'd like to use for the
conference that is about to happen. By default both audio and video are desired.
It's possible, however, that the user requested to start the call with no
video/audio, in which case it's muted in base/media and a track is not created.
The first time the app starts (with the welcome page) it will request permission
for video only, since there is no need for audio in the welcome page. Later,
when a conference is joined permission for audio will be requested when an audio
track is to be created. The audio track is not destroyed when the conference
ends. Yours truly thinks this is not needed since it's a stopped track which is
not using system resources.
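A sketch of the decision (the route names, state shape and helper below are
illustrative):

```js
// Sketch only: which local tracks are desired for the route about to render.
const MEDIA_TYPE = { AUDIO: 'audio', VIDEO: 'video' };

function getDesiredMediaTypes(state, route) {
    if (route === 'WelcomePage') {
        // The welcome page only needs the camera preview.
        return [ MEDIA_TYPE.VIDEO ];
    }

    if (route === 'Conference') {
        const { audio, video } = state['features/base/media'];
        const desired = [];

        // No track is created for media requested to start muted.
        audio.muted || desired.push(MEDIA_TYPE.AUDIO);
        video.muted || desired.push(MEDIA_TYPE.VIDEO);

        return desired;
    }

    // No welcome page and not in a conference (e.g. BlankWelcomePage):
    // no local tracks are desired, so existing ones get destroyed.
    return [];
}
```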
Deep/universal linking now utilizes loadURL (when possible). But loadURL
is imperative in the native source code while its JavaScript counterpart
i.e. React App Component prop url is declarative. So there's the
following bug: open a URL, leave the conference (by tapping the hangup
button, for example), and then open the same URL again: you're actually left
on the Welcome page (if enabled; otherwise, a black screen).
The implementation has a flaw though: opening the same URL twice in a
row without an intervening leave will leave the first opening and join
the new opening. This can be improved by not leaving and joining if the
conference is joined, joining, and not leaving. But that can be done
separately as an improvement independent of the current implementation
details.
As https://facebook.github.io/react/docs/typechecking-with-proptypes.html
says, React.PropTypes has moved into the npm package prop-types since
React v15.5. I've already failed to update certain devDependencies
because they mandate the use of prop-types so I'd rather we (gradually
at least) move to prop-types rather than face a lot of work later on.
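The migration is mechanical; for example (the component and its props below are
purely illustrative):

```js
// Before (React <= 15.4):
//     MyComponent.propTypes = { visible: React.PropTypes.bool };
//
// After: the validators live in the standalone prop-types package.
import PropTypes from 'prop-types';
import React, { Component } from 'react';

class Toolbar extends Component {
    render() {
        return null;
    }
}

Toolbar.propTypes = {
    visible: PropTypes.bool
};
```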
Avatars are cached to the filesystem and loaded from there when requested again.
The cache is cleaned after a conference ends and on application startup
(defensive move).
In addition, implement a fully local avatar system, which is used as a fallback
when loading a remote avatar fails. It can also be forced using a prop.
The fully local avatars use a user icon as a mask and apply a background color
which is picked by hashing the URI passed to the avatar. If no URI is passed, a
random color is chosen.
A grace period of 1 second is also implemented so a default local avatar will be
rendered if an Avatar component is mounted but has no URI. If a URI is specified
later on, it will be loaded and displayed. In case loading the remote avatar
fails, the locally generated one will be used.
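The color picking is essentially a stable hash over the URI; a sketch (the
palette and hash function are illustrative):

```js
// Sketch only: pick a background color for the locally generated avatar.
const AVATAR_COLORS = [ '#2E7D32', '#1565C0', '#6A1B9A', '#C62828', '#EF6C00' ];

function getColorForURI(uri) {
    if (!uri) {
        // No URI: pick a random color.
        return AVATAR_COLORS[Math.floor(Math.random() * AVATAR_COLORS.length)];
    }

    // Cheap string hash, stable for a given URI so the same participant always
    // gets the same background color.
    let hash = 0;

    for (let i = 0; i < uri.length; i++) {
        hash = (hash * 31 + uri.charCodeAt(i)) % 2 ** 32;
    }

    return AVATAR_COLORS[hash % AVATAR_COLORS.length];
}
```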
Web's ExternalAPI accepts an object with properties as one of its
constructor arguments, from which it generates a URL. Mobile's
JitsiMeetView.loadURLObject is supposed to accept pretty much the same
object.
Introduces loadURLObject in JitsiMeetView on Android and iOS which
accepts a Bundle and NSDictionary, respectively, similar in structure to
the JS object accepted by the constructor of Web's ExternalAPI. At this
time, only the property url of the bundle/dictionary is supported.
However, it allows the public API of loadURLObject to be consumed. The
property url will be made optional in the future and other properties
will be supported from which a URL will be constructed.
The end goal of this patch was to avoid opening the camera when there is no
welcome page.
In order to achieve this, the logic for creating the local tracks was
refactored:
Before this patch local tracks were created when lib-jitsi-meet was initialized,
and destroyed when it was deinitialized. As a side note, this meant that when a
conference in a non-default domain was joined, local tracks were destroyed and
recreated in quick succession.
Now, local tracks are created and destroyed based on what the next route will be,
and this happens when the target room has been decided. This allows us to create
local tracks the moment we need to render any route, and destroy them when there
is no route to be rendered. As an interesting byproduct, this refactor also
avoids the destruction + recreation of local tracks when a conference in a
non-default domain was left.
React (pun intended) to prop changes, that is, load the newly specified URL.
In addition, fix a hidden bug in loading the initial URL from the linking
module: we prefer a prop to the URL the app was launched with, in case somehow
both are specified. We (the Jitsi Meet app) are not going to run into this
corner case, but let's be defensive just in case.
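In the lifecycle methods of the era, that boils down to something like this
(the component is assumed to be connect()ed so dispatch is a prop, and
appNavigate is a hypothetical action creator used only for illustration):

```js
// Sketch only: a declarative url prop driving the imperative URL loading.
import { Component } from 'react';
import { Linking } from 'react-native';

// Assumed/hypothetical action creator which navigates the app to a URL.
const appNavigate = url => ({ type: 'APP_NAVIGATE', url });

class App extends Component {
    componentDidMount() {
        // Prefer the url prop over the URL the app was launched with, in case
        // somehow both are specified.
        Linking.getInitialURL()
            .then(initialURL => this._openURL(this.props.url || initialURL));
    }

    componentWillReceiveProps(nextProps) {
        // React (pun intended) to prop changes: load the newly specified URL.
        if (nextProps.url !== this.props.url) {
            this._openURL(nextProps.url);
        }
    }

    render() {
        return null;
    }

    _openURL(url) {
        url && this.props.dispatch(appNavigate(url));
    }
}
```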
The current implementation doesn't use the API and Transport modules. This is
due to the fact that they are too tied to APP at the moment, which is web only.
Once API is refactored and moved into the Redux store this will be adjusted,
though it's unlikely that the lowest level React Native module (ExternalAPI)
changes drastically.
This commit also introduces a stopgap limitation of only allowing a single
instance for JitsiMeetView objects on both Android and iOS. React Native doesn't
really play well with having multiple instances of the same modules on the same
bridge, since they behave a bit like singletons. Even if we were to use multiple
bridges, some features depend on system-level global state, such as the
AVAudioSession mode or Android's immersive mode. Further attempts will be made
at lifting this limitation in the future, though.
The counterpart of the external API in the Jitsi Meet Web app uses the
search URL param jwt to heuristically detect that the Web app is very
likely embedded (as an iframe) and, consequently, needs to forcefully
enable itself. It was looking at whether there was a JSON Web Token
(JWT) but that logic got broken when the JWT support was rewritten
because the check started happening before the search URL param jwt was
parsed.
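In other words, the check has to run against the parsed URL, along these lines
(the helper name is illustrative):

```js
// Sketch only: detect "embedded" mode from the jwt search param *after* the
// location URL has actually been parsed.
function isLikelyEmbedded(locationURL) {
    const jwt = new URLSearchParams(locationURL.search).get('jwt');

    return Boolean(jwt);
}
```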
There were getDomain, setDomain, SET_DOMAIN, setRoomURL, SET_ROOM_URL
which together were repeating one and the same information; moreover, the
'room URL' abstraction was not 100% accurate because the URL
would exist even when there was no room. Replace them all with a
'location URL' abstraction which exists with or without a room.
Then the 'room URL' abstraction was not used in (mobile) feature
share-room. Use the 'location URL' there now.
Finally, remove source code duplication in supporting the Web
application context root.
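For illustration, the single abstraction could look roughly like this (the
action type and derivations are hypothetical):

```js
// Sketch only: a single 'location URL' in the redux store from which the old
// domain and room URL notions can be derived when needed.
const setLocationURL = locationURL => ({ type: 'SET_LOCATION_URL', locationURL });

// Illustrative derivations (there may be no room, yet the location URL exists):
const getDomain = locationURL => locationURL && locationURL.host;
const getRoomName = locationURL => locationURL && locationURL.pathname.slice(1);
```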